Make Tech Easier » Hardware
Uncomplicating the complicated, making life easier
Thu, 07 Nov 2013 06:45:24 +0000

3 Inconvenient Truths About Gaming Computers
Tue, 05 Nov 2013 00:25:22 +0000

While building a gaming computer, many people have the wrong perception of the importance of PC gaming hardware choices. Let's explore a few of these problems.

The post 3 Inconvenient Truths About Gaming Computers appeared first on Make Tech Easier.

One of the reasons people modify their computers extensively is to be able to play hardcore games. Who doesn’t like having complete control over what games they play, and what graphics they get? The release of Grand Theft Auto V on the Xbox 360 left much to be desired, since the console’s graphics hardware was sub-par compared to what one would have considered a high-end computer at the time. What if Rockstar Games had released a PC version on the same day? Would people still have bought the Xbox 360 version? While a PC can far outperform gaming consoles, sometimes from the very day those consoles reach a store shelf, there is still a lot wrong with people’s perceptions of the importance of PC gaming hardware choices. It’s time we explored a few of these problems.


My rule of thumb is: if I can get an average of 25-30 frames per second from my games, my graphics card is fine. People obsess over the frame rates their cards deliver instead of finding a sweet spot and sticking with it. The hardest thing for most people to accept is that their investment was in vain, particularly those who pay more than $1,300 for a graphics card. Graphics cards in the $300-500 range usually work just fine with most games. It’s the developer’s job to make a game run on reasonable hardware, not your job to buy a new card for every release. You’ll see this theme recur throughout this article.
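To put that sweet spot in perspective, here's a quick back-of-envelope sketch (Python, purely illustrative) of how much time a graphics card gets to render each frame at a given frame rate; the tighter the budget, the pricier the card needed to meet it:

```python
def frame_time_ms(fps):
    """Milliseconds the graphics card has to render each frame."""
    return 1000.0 / fps

# At the 25-30 fps sweet spot, the card has a comfortable 33-40 ms per frame.
print(round(frame_time_ms(30), 1))   # 33.3
print(round(frame_time_ms(25), 1))   # 40.0
# Chasing 120 fps cuts the budget to ~8 ms, which is what $1,300 cards are for.
print(round(frame_time_ms(120), 1))  # 8.3
```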


The truth is that most games don’t rely heavily on your CPU. They lean on your GPU (graphics card), since that’s what renders the graphics. A dual-core heavyweight can perform just as well as a quad-core counterpart, and you don’t need a $1,000 CPU to get your game rolling. If you do, something’s seriously wrong with the developer. Here’s a secret: in all likelihood, even Intel’s earliest iterations of the i7 can outperform most consoles of their day. If a developer releases both a PC version and a console version, but the PC version chews on your CPU relentlessly, you can bet they don’t really care about their PC users. Don’t let them get away with that.


You’ll accomplish nothing with a gaming keyboard/mouse unless:

  1. You’re used to handling tons of different specialized buttons in your daily life (or you don’t mind the learning curve), or
  2. You’re playing games with so many complex functions (read: World of Warcraft) that you can’t really do without the special keys.

Let’s be clear, though: When I talk about gaming equipment, I’m talking about the hardcore stuff, like Logitech’s G19 keyboard or Razer’s Ouroboros mouse. Don’t get me wrong. They’re good, for the purposes they serve. However, for most gamers, a sturdy, decent keyboard and mouse combination works without a hitch. You don’t need to go out and spend $800 on the latest crazy gear.

The gaming world has gone a bit overboard in terms of hardware. If you want to disregard everything mentioned above and future-proof your PC, go ahead, but do a cost-benefit analysis first. Is it really better to have that shiny new graphics card today for $1,500, or would you rather buy it in three years when its price has dropped to $400? If you feel compelled to forgo my advice, that’s your right. But take some time. Think about things a little. Be patient.

And if you have any other inconvenient truths to suggest, please leave a comment below. We’d all love to see them!


3 Questions About Fiber Optics Finally Answered
Mon, 28 Oct 2013 23:25:29 +0000

You're probably wondering why your ISP is not switching to fiber optics and providing faster connection speeds. We will take a detailed look at the situation and answer all of these questions in a way that's easy to understand!

The post 3 Questions About Fiber Optics Finally Answered appeared first on Make Tech Easier.

Fiber optic internet is a rapidly growing sector of the telecommunications industry. You are probably reading this article through such a connection. And if you’re not, you’re probably wondering why your internet service provider (ISP) has not taken the steps necessary to bring this kind of connection to you. What makes a fiber optic connection so much faster than a copper Ethernet-based network? Is there any limit to how much bandwidth can pass through fiber? How come some countries have faster internet than others? I’ll try to answer all of these questions in a way that’s easy to understand!


The era of Ethernet is dimming. But why do people prefer fiber optics to Ethernet? ISPs seem to be in a hurry to upgrade their infrastructure.

First of all, Ethernet isn’t going anywhere. ISPs don’t operate on Ethernet; that’s just the 8-pin copper connection at the very end of the line, where your computer plugs into its wall socket or router. ISPs actually operate on a series of copper cables designed to carry signals at varying frequencies. Some of them have chosen to replace these cables with optical cables, which, as we’re about to see, are superior in several ways.

Instead of carrying data through electrical currents, which attenuate (diminish) over distance, fiber optic cables carry pulses of light through thin strands of glass or plastic enclosed in shielding material. Light suffers far less attenuation, and the pulses can be made much shorter. Since you can send shorter pulses, you can send more of them per second. Therefore, fiber has a much higher potential bandwidth than copper.

Fiber optic cables don’t flinch when sending a signal over a distance of 200 kilometers (~120 miles). Copper struggles to send signals at distances of even 20 km (~12 miles). With copper, you also have to measure changes in the electromagnetic field of each wire, while with fiber, you only have to translate pulses of light into electric signals. In other words, it’s much more efficient with long-distance high-speed communication. What’s not to like?


This question is difficult to answer precisely. Today’s fiber cables typically carry a few terabits of data per second, and some experimental links reach hundreds of terabits per second. In theory, you could send one pulse per femtosecond, or 0.000000000000001 seconds, or so.


For one, switching to fiber is very difficult. If the whole investment were in the cables, ISPs would be on it without a hitch. But there is a lot more to consider: the hardware that routes signals to their destinations, the processing units, and lots of other equipment all have to be replaced. The old copper equipment has to be scrapped, and the copper cables disposed of properly. Many things can become roadblocks in the adoption of fiber optics.

But perhaps the biggest potential obstacles to fiber adoption (especially in the US) are government and public/private bureaucracies. It’s no coincidence that countries with the freest telecommunications markets, such as Hong Kong, Japan, Romania, Singapore, Switzerland, and Latvia, also have the greatest surpluses in bandwidth. In these countries, it’s not uncommon to see someone with a $50 plan using a line that’s faster than what their computers can handle; something on the order of 500 Mbps or higher.

Needless to say, each country has unique obstacles and different terrain to overcome. There’s no cookie-cutter model for what a country must do to quickly adopt fiber internet. For example, some countries have freed-up telecom markets, but their operators are too poor to take the leap.

Nothing spoils you more than downloading a massive 6 GB database in a few minutes. This will perhaps one day become a reality for most of the world. But, for now, we patiently wait while we navigate the stormy territory of red tape and financial burdens. For those of us who are lucky enough to look at this text from behind a fiber-connected computer, it’s time to take a moment and appreciate what we have.
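For a sense of just how spoiled that is, here's the arithmetic sketched out, assuming a 500 Mbps fiber line running at full speed with no protocol overhead:

```python
def download_seconds(size_gigabytes, speed_mbps):
    """Rough download time: convert gigabytes to megabits, divide by speed."""
    size_megabits = size_gigabytes * 8 * 1000  # 1 GB ~ 8,000 megabits (decimal)
    return size_megabits / speed_mbps

# A 6 GB download on a 500 Mbps fiber line takes about a minute and a half.
print(round(download_seconds(6, 500)))  # 96 seconds
```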

If you’d like to ask more questions about fiber, give it a shot in the comments section below!


7 Technology Myths That Cost You Money
Fri, 18 Oct 2013 23:25:36 +0000

Technology myths are everywhere. On occasion, they cost you money and make you poorer. Let's take a look at some of the technology myths that cost you money.

The post 7 Technology Myths That Cost You Money appeared first on Make Tech Easier.

Technology myths are everywhere. You might read them on the Web, or hear them from retailers who are trying to push big, unwanted products out of their shops. In some cases, these myths are harmless; on occasion, they can cost you money and make you poorer. Let’s take a look at some of the technology myths that cost you money.

When you are evaluating software to use for your projects, don’t write off open-source software. Much of the time, it is as competent as, or even better than, commercial software.


Most software companies want you to believe that their product is the best and that the open-source alternative is not up to par with their offering. There is some truth to this, but it is not always the case. With popular open-source software like Android, the more people collaborate and work on it, the better the product gets, not to mention that it is free.

A 2.1 MP photo is clear enough to use as a wallpaper, while a 4 MP photo is good enough for a 16×20-inch print. We have shown you that it takes more than pixels to produce a great photo, but sadly, retailers still use the megapixel count as the selling point for their cameras. For a simple point-and-shoot camera, 8 MP is more than sufficient.
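You can sanity-check those numbers yourself with a quick sketch. It assumes the image's aspect ratio matches the paper and ignores cropping, so treat it as a rough guide only:

```python
import math

def print_dpi(megapixels, width_in, height_in):
    """Approximate pixels per inch when a photo fills a given print size,
    assuming the image aspect ratio matches the paper."""
    pixels = megapixels * 1_000_000
    return math.sqrt(pixels / (width_in * height_in))

# A 4 MP photo fills a 16x20-inch print at ~112 dpi, fine at arm's length;
# 8 MP pushes that to ~158 dpi, and beyond that the gains taper off.
print(round(print_dpi(4, 16, 20)))   # 112
print(round(print_dpi(8, 16, 20)))   # 158
```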

You might be laughing at your friend for the $700 PC he built himself, thinking that it is not up to par with the $1,500 PC you got from Best Buy. You might be surprised to find that the $700 PC could be better in hardware specifications, functionality, and performance. Most PCs in the store come with a generic configuration for the mass market. Unless you are willing to break the bank, you won’t be able to get a customized PC for heavy-duty work. In addition, the Windows OS installed on these PCs often comes with a lot of crapware that can further slow down performance.


On the other hand, when you build your own PC, you can compare the various hardware options and get the ones that suit your needs and fit your budget. You can also choose the OS of your choice (Windows, Linux, or Mac OS X on a hackintosh), none of which comes bundled with crapware.

If you have purchased anti-virus software that costs $100 or more, I am sorry to inform you that some of the best anti-virus software out there is free. Most anti-virus software, free and paid, does a good job of detecting viruses and malware. The only point of failure is the user, which is you. If you always click on links or run .exe files without checking the source, even the most expensive anti-virus software out there can’t keep your computer from being infected.

When you buy a computer, mobile phone, or any electronic device, the retailer will often try to upsell you insurance or an extended warranty. Is it really necessary? We don’t think so, particularly for small items that become obsolete very fast. Weigh the cost of all those warranties you never use against the probability that you will need a repair outside the warranty period, and the insurance (or extended warranty) just isn’t a worthy deal.


Of course, for a big, expensive item that costs an arm and a leg to repair, and one you expect to use for years, an extended warranty is something you should surely consider.

It all depends on the screen resolution. If you are getting a 30-inch (or bigger) monitor with a minimum resolution of 2048×1152 (better still, 2560×2048), then you are good to go. However, if you are stuck with a 1920×1080 resolution on a 27-inch monitor, instead of clear images you will get pixelated ones (where each pixel is stretched to cover more screen space). When you are getting a monitor, you have to look beyond the screen size.


Have you ever seen a $3 HDMI cable selling on Amazon and another selling for $2,000? You might be tempted to think that the $2,000 cable gives you better quality to justify the roughly 66,500% difference in price. The truth is, the $3 cable will work just as well, without any noticeable difference in quality.
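For the curious, the markup arithmetic is straightforward (it lands a touch under the eye-catching 66,666% figure, since markup is measured against the cheap cable's price rather than as a simple ratio):

```python
cheap, pricey = 3, 2000
markup_percent = (pricey - cheap) / cheap * 100

# A $3 cable marked up to $2,000 is a 66,567% markup -- for zero visible gain.
print(f"{markup_percent:,.0f}%")  # 66,567%
```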

The above list of technology myths is by no means exhaustive. There are tons of technology myths out there that cost us money. It pays to think and do some research before paying for the bigger or more expensive product.

Image credit: How university open debates and discussions introduced me to open source, coupons, Installing Computer Parts


5 Common Laptop-Killing Practices Most of Us Are Guilty Of
Mon, 14 Oct 2013 23:25:06 +0000

Do you own an expensive laptop? You will probably want to avoid these laptop-killing practices. They apply to smartphones and tablets as well.

The post 5 Common Laptop-Killing Practices Most of Us Are Guilty Of appeared first on Make Tech Easier.

Unless you have a mortal grudge against your laptop, you certainly don’t want to kill it. Laptops are expensive, and you try to do everything you can to keep yours safe from an untimely death. Yet although people have every incentive to keep their laptops away from danger, they often put them in harm’s way. These little devices suffer many issues because of the limited space their components reside in and the conveniences that come with portability. The mistakes I’m about to present don’t apply strictly to laptops; some of them are also relevant to smartphones and tablets.

Although it’s called a “laptop,” its place is not on top of your lap, or on any soft surface, for that matter. Laptops are specifically designed to let air flow through the bottom and sides. They have little rubber feet on the bottom that lift the laptop slightly off the surface it’s sitting on. If you put the laptop on your lap, you may block airflow through the bottom and trap heat.

If you don’t like using laptops on tables, get a portable multipurpose laptop stand.


The urge to drink something is quite common when you spend long periods of time on the web. You often might have a glass of water or some other drink nearby. While you’re looking at the screen, you might miss the glass while reaching for it and spill it all over your laptop. In my case, one of my cats did the job for me on a $1,500 laptop.

The moral of the story is: if you don’t want your laptop to die from a short circuit on its motherboard, don’t give it the chance to drink whatever you’re sipping on.


Generic laptop chargers are often much cheaper than the replacements most manufacturers offer. For this reason, they have become quite popular with people who travel a lot and end up leaving their original chargers behind in hotel rooms.

Each laptop requires a specific voltage and amperage of direct current (which together translate into wattage). If a charger exceeds the requirement just slightly, the laptop might shrug it off and charge anyway. The problem is that some generic chargers exceed it by quite a lot. If a laptop gets overloaded, the battery can literally fry; some batteries even vent gas or explode. In short, never charge your laptop with a generic charger unless you’re well versed in how electricity works and you’ve checked the specifications on both the charger and the battery.
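One common rule of thumb, sketched below, is that a replacement charger's voltage should match the laptop's rating closely, while its amperage rating may safely exceed the laptop's, since the laptop only draws the current it needs. The figures here are illustrative; always verify against the labels on your own hardware:

```python
def charger_is_safe(charger_volts, charger_amps, laptop_volts, laptop_amps,
                    volt_tolerance=0.5):
    """Conservative sanity check for a replacement charger (illustrative).
    Voltage must match within a small tolerance; a higher amperage rating
    is fine because the laptop draws only what it needs."""
    volts_ok = abs(charger_volts - laptop_volts) <= volt_tolerance
    amps_ok = charger_amps >= laptop_amps
    return volts_ok and amps_ok

# A 19 V / 4.74 A laptop (~90 W) with a 19 V / 6.3 A generic charger: fine.
print(charger_is_safe(19.0, 6.3, 19.0, 4.74))  # True
# The same laptop with a 24 V charger: voltage mismatch, do not use it.
print(charger_is_safe(24.0, 4.0, 19.0, 4.74))  # False
```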


I get it. You’re in a hurry and don’t have time to bother with unplugging the charger. Actually, I don’t get it. It’s really tragic that people do this when one simple hand movement could prevent such a mess. When you put a laptop into a confined space with the charger attached, the space might not have room for the protruding plug. This presses the charger’s plug into the laptop’s socket, eventually bending it and possibly desoldering the socket’s housing from the main board. What you’ll end up with is a laptop that will never charge again. Good luck repairing that for less than a fortune.

When you close a laptop’s lid, it goes through a shutdown sequence (sleep, hibernation, or a full shutdown, depending on how you configured it). While it’s doing that, the hardware is still on. I’m particularly concerned about the hard drive, which uses mechanical parts to read and write data. The read/write heads hover at such a tiny distance above the platters that even the smallest shock can completely obliterate the drive while the discs inside are still spinning.

If your hard drive crashes, you’ll lose all of your data. That’s a consequence of impatiently putting your laptop through stress right after closing its lid. Instead, wait for the laptop to finish what it’s doing before laying a finger on it.

Read each of these mistakes again. You’ll see a recurring theme that sums everything up: Keep your laptop away from things that can harm it, treat it with delicacy, and maximize its airflow as much as possible. That’s the gist of it! If you’d like to suggest something else, you’re welcome to comment below.


What Does Hibernating Do To Your Computer’s Performance?
Fri, 11 Oct 2013 23:25:40 +0000

You have probably seen the "Hibernate" option when shutting down your PC. Do you know what it does and how it affects your PC's performance?

The post What Does Hibernating Do To Your Computer’s Performance? appeared first on Make Tech Easier.

Back in the days when computers still ran on 2 GB hard disks, the general way to turn off a computer while saving its current state was called “standby.” Now, computers have newer, more creative nicknames for this mode, each with functions that differ from the original “standby” (and sometimes don’t). Among these is hibernation. What does it do? How does it help you? And, most importantly, how does it affect your computer’s performance? Hibernation is a concept poorly understood by most day-to-day computer users, and when presented with more options, they don’t know exactly what to choose.

When you shut down the computer (in other words, you click “Shut down,” not “Sleep” or “Hibernate”), it closes all your programs, stops system services, stops the operating system, and then cuts power to the machine. When you boot up again, your computer starts with a clean slate and only runs the programs set to run at startup. Hibernate, an option that exists on battery-reliant Windows systems, is an alternative to “Sleep.” So we must first find out what “Sleep” is.


If you put your computer in “Sleep” mode, it cuts power to all hardware except RAM, much like “standby” did in all versions of Windows from 98 to Server 2003 (including XP, for consumers). Why does it refuse to cut power to RAM? Because RAM is volatile: it holds all the program data from the applications you were running before you put the computer to sleep, and it loses that data the moment power is cut. This means you have to maintain a constant supply of power to the computer, which is why sleep is preferable for desktop systems drawing power from wall outlets.

Hibernation is an alternative that stores all the contents of RAM on your hard drive. Instead of continuously supplying power to RAM, Windows flushes everything in RAM onto the hard drive and then shuts off the computer. Since your hard disk stores data magnetically, it doesn’t need a continuous power supply (yippee!), which makes hibernation a valid alternative for battery-operated devices. When you turn the computer on again, it restores all the data it wrote back into RAM and removes it from the hard drive. You’ll see all your open programs again, just as you would with sleep mode.

Overall, your computer will be just as fast (or slow) as it was when you put it into hibernation. The difference is in the boot process. Hard drives (and even solid-state drives) are much slower than RAM. Since “Sleep” mode preserves the RAM as-is, waking from it is much faster than resuming from hibernation. A hibernating computer must read all that data back off its hard drive and write it into RAM, which makes the whole process rather tedious.
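A rough estimate shows why resuming from hibernation feels slow. This sketch assumes the hibernation file is about as big as RAM; in practice Windows compresses it, so treat these numbers as upper bounds:

```python
def resume_seconds(ram_gb, disk_read_mb_s):
    """Rough time to read a full hibernation file back into RAM.
    Real hibernation files are compressed and smaller than RAM,
    so this is an upper bound."""
    return ram_gb * 1024 / disk_read_mb_s

# 8 GB of RAM over a 100 MB/s laptop hard drive vs. a 500 MB/s SSD:
print(round(resume_seconds(8, 100)))  # 82 seconds
print(round(resume_seconds(8, 500)))  # 16 seconds
```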


Even though hibernation is a slowpoke, it’s significantly useful when you don’t plan to supply your laptop with power from an outlet. Leaving the computer asleep will drain the battery much faster than hibernation does. The reasons should be obvious by now. So, if you’re leaving the laptop on battery power, put it into hibernation!

If you’re in desperate need of more information and can’t find anything on search engines, ask a question in the comments section. I lurk around and answer questions pretty quickly!

Image credit: French bulldog puppy sleeping with teddy bear by BigStockPhoto


Does a Better CPU Lead to a Better Phone? 4 Big Mobile Processor Questions Answered
Mon, 07 Oct 2013 23:25:45 +0000

Most manufacturers like to boast about how fast their mobile processors are, but does that translate into better smartphone performance?

The post Does a Better CPU Lead to a Better Phone? 4 Big Mobile Processor Questions Answered appeared first on Make Tech Easier.

We’ve pretty much got the whole computer processing paradigm figured out, especially since Intel made it easier with its different i-series chips. But in the mobile world, there’s still a ton of mystery surrounding the processing power of phones and tablets. First off, there are many different types of chips out there for these small devices: Nvidia has its Tegra series, Qualcomm has its Snapdragon processors, and ARM designs the CPU cores most of these are based on, known as the Cortex series. How are we to distinguish between them? Is a mobile processor similar to a PC’s CPU, or are we looking at a whole new beast? We’ll answer these questions and dive a little further into the realm of processor architectures and app development.

From a raw-materials perspective, CPUs are virtually all made of the same stuff. Silicon has enabled us to create ever-smaller components that pack bigger punches. But if we learned anything from desktop computers, it’s that an AMD processor performs differently from an Intel processor. Mobile CPUs operate in much the same way. Even though most of them are children of the ARM family, you’d be mistaken to say they’re all the same.

In many instances, a high-end dual-core processor can perform better than a low-end quad-core chip. It all depends on the architecture that’s etched into the silicon. ARM publishes comparison charts illustrating this.


Processors based on newer Cortex architectures have improvements that make them at least somewhat superior to older designs.

Performance on mobile devices does not depend entirely on the CPU. While the CPU plays a central role, it still has to wait for other hardware to respond. If your RAM or storage is cheaply made and can’t keep up with the CPU, you’re in for a nasty surprise. Phone and tablet specification sheets rarely tell you how fast the other hardware is. They always use the CPU as a selling point, which is at the very least an annoyance for those who would like to see exactly what they’re buying.

The biggest drags on a mobile CPU are attached storage, internal storage, and random access memory (RAM); that’s pretty much the rest of the phone. After purchasing a phone, the only thing you control is attached storage (such as microSD cards). Get Class 10 microSD cards to make sure you don’t bog down the phone. Beyond that, all you can do to judge a potential purchase is to watch or read reviews of the product.

Just as on your PC, not all mobile applications are developed to use more than one core. Multitasking may be enhanced by more cores, but you’ll only notice the difference in apps designed for those environments. Apps like Facebook, Twitter, Kindle, and Financius won’t see significant upticks in speed.


Your YouTube app and games will probably see relief with more cores to play around with. So, frankly, you’ll only really see improvements with multimedia and gaming apps. Nothing else seems to need a high amount of multitasking.

It depends. Some processors can shut off some of their cores when they’re not being used. The Tegra 3 is a good example of this. When a processor uses all of its cores to process data all of the time, it will siphon significant amounts of power, negatively impacting battery life.

The deeper you dive into the mobile processing world, the more you realize that its rules are quite similar to those governing PC processors. Clock speed and core count aren’t everything. There’s much more to look at, and reading a little about each processor will reveal whether it is genuinely superior to the other models vying for your money. And to answer the headline question: a better CPU might not translate into better smartphone performance, because other factors determine a smartphone’s performance as well.

If you have a question regarding a mobile processor, post a comment and you will find an answer!

Image credit: SETTING A NEW STANDARD IN TABLETS and Tegra 4 – Chip Shot


4 Myths About Computer Monitors You Might Believe
Fri, 04 Oct 2013 23:25:45 +0000

When buying a computer monitor, the truth is that most people have no idea how they're being duped into buying displays that throw fancy numbers at them. For this reason, I'll point out the biggest myths revolving around these displays.

The post 4 Myths About Computer Monitors You Might Believe appeared first on Make Tech Easier.

Even to a refined aficionado, computer monitors are still a realm shrouded in smoke and mirrors. Manufacturers like to attach specs to their displays, usually numbers that give you the impression that “bigger is better.” When you buy a monitor, you’re purchasing a window into your computer; without it, you’re (literally) running blind. Beyond being a necessity, a monitor is more useful if it can display colors and video in a way that’s pleasant and easy on the eyes. The truth is that most people have no idea how they’re being duped into buying displays that throw fancy numbers at them. For this reason, I’ll point out the biggest myths revolving around these displays.


One of the latest selling points for monitors is the refresh rate. This is all fine and dandy, until you realize that most movies play at 24 frames per second. For ordinary desktop use and video, your eyes are unlikely to notice the difference between a standard 60 Hz LCD/LED monitor and a fancy new 120 Hz one. Want to try this for yourself? Go to a computer hardware store and check out the demo monitors; they’re probably all running the same clip. Look at a monitor with a 60 Hz refresh rate and then switch to one at 120 Hz. Did you notice a difference? Any difference you do see likely comes from other aspects of the monitors (such as image-processing hardware that removes motion blur). For the fairest comparison, look at two monitors from the same brand and, if possible, the same product line.

The 60/120 Hz difference was noticeable back in the days of CRT monitors, which flickered at lower frequencies and tired your eyes. LCD/LED monitors do not suffer from this shortcoming.


A monitor’s response time measures how quickly its pixels can change from one color to another. If a monitor responds slowly, moving images smear: perform an action (like moving your mouse), and you’ll see a faint trail between where things were and where they are now. This time is measured in milliseconds.

Yes, it is better to have a 10 ms response time than a 30 ms one. If you use a monitor with a high response time, you’ll immediately notice annoying ghosting in almost anything that moves. This is especially apparent when you’re gaming. But your eyes also have a limited response time to changes in their environment, usually around 10 ms. Any monitor response time below roughly 10 ms consequently won’t really affect you.


Throw away everything you’ve heard about contrast ratios from a sales rep. Most of it is garbage, for lack of a better word. If you’ve ever been enticed into buying a monitor with a 5,000,000:1 contrast ratio, put that money back in your pocket and read this. Most LCD monitors produce static contrast ratios between 1000:1 and 1500:1.

Wait a minute… Do you know what a contrast ratio is?

Contrast ratios describe the difference between black and white luminance on a monitor. This means that monitors with high contrast ratios tend to have brighter whites and blacker blacks. A lot is being done in this field to improve these ratios, but you’ll have to test a monitor yourself to see what it can really do. Unfortunately, the contrast ratio on the box is just a formality that usually shows the dynamic ratio, and the dynamic ratio is just a fancy way of simulating darker blacks, often with mediocre results.

Monitors with dynamic contrast will play dark scenes in movies horribly. Some objects that should be bright are instead darkened along with the rest of the background. Sometimes, it’s even worse, with entire scenes almost completely black.

If the monitor shows blacks and whites at near-perfection compared to your old one, and there’s no confusion when using black backgrounds, pull the money out of your pocket again and buy it.


Who doesn’t like a big screen? If you can afford it, you’re tempted to buy it. But I’m advising you to grip your wallet tightly until you finish reading this. Most monitors (even high-end models) top out at a 1920×1080 resolution, otherwise known as 1080p. Once you get a screen larger than 27 inches, you’ll start to notice pixelation, because there aren’t enough pixels in the resolution to produce a smooth image across the larger area. It becomes more noticeable the closer you sit to your monitor.
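You can estimate pixel density yourself from a monitor's resolution and diagonal size. Here's a small sketch using the resolutions discussed in this article:

```python
import math

def pixels_per_inch(width_px, height_px, diagonal_in):
    """Pixel density: diagonal resolution divided by diagonal size."""
    diagonal_px = math.sqrt(width_px ** 2 + height_px ** 2)
    return diagonal_px / diagonal_in

# 1080p is comfortable at 24 inches but gets noticeably coarse at 30:
print(round(pixels_per_inch(1920, 1080, 24)))  # 92 ppi
print(round(pixels_per_inch(1920, 1080, 30)))  # 73 ppi
# A higher resolution restores the density on a 30-inch panel:
print(round(pixels_per_inch(2560, 2048, 30)))  # 109 ppi
```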

If you plan to buy a 30-inch screen, make sure it supports a resolution of at least 2048×1152. It’s still widescreen, and it’s called the quad wide extended graphics array (QWXGA). For enormous screens (like those you’d normally see at the Olympics), get one that supports a resolution of at least 2560×2048, also known as the quad super extended graphics array (QSXGA), or the latest resolution to hit shelves, known as 4K (3840×2160).

Having a resolution proportional to the size of your screen will give you ample viewing possibilities and a beautiful, crisp image. Good luck finding monitors that support such resolutions, though!

Get something that pleases you. Test it out. Abuse it (reasonably, of course). Make sure that you don’t regret ever buying it. And, most importantly, ignore the evil little numbers that misguide you into buying something that, in reality, has less to offer than it advertises. If you have monitor questions, the comments area is open for you at any time.

The post 4 Myths About Computer Monitors You Might Believe appeared first on Make Tech Easier.

The Beginner’s Guide to Smartphone Camera Fri, 27 Sep 2013 23:25:16 +0000 If you plan to use your phone to take pictures and video, you'll need a phone from a manufacturer that prioritizes the camera. Unless you get this, you'll be missing out completely on features that can make your pictures look professional.

The post The Beginner’s Guide to Smartphone Camera appeared first on Make Tech Easier.

Smartphone cameras are probably the most sophisticated little devices out there. For their size, it’s very difficult not to stare in amazement at the intricate ingenuity that goes into some phones. While a regular mid-range phone may have a simple 1080p camera, there are many other camera features on high-end phones that can make pictures brilliant. If you plan to use your phone to take pictures and video, you’ll need a phone from a manufacturer that prioritizes the camera. Otherwise, you’ll be missing out on features that can make your pictures look professional.


The whole “smartphone vs. camera” debate has been raging ever since high-end backside-illuminated (BSI) CMOS sensors started appearing on regular phones. So, what’s better?

The answer is: It really doesn’t matter. It all depends on why you might want a phone with a good camera as opposed to a piece of hardware completely dedicated to taking photographs. People who are serious about photography might get a decent DSLR camera, but still want to spend some cash on a smartphone with good optics simply because it’s less bulky. You might not be carrying your rig with all its attachable lenses around when a great photo opportunity presents itself. In those cases, it’s very useful to have a powerful camera in your pocket.

You just can’t drag all of this around every time you walk out of your house:


One of the biggest disadvantages of phones is that there’s still no way to use attachable lenses; you’re pretty much stuck with whatever the hardware can provide. Still, I cannot stress enough the convenience of having a little backup camera for those once-in-a-lifetime moments.

If a phone doesn’t give you any information about its aperture or focal length, you have no way of telling whether it has a camera that meets your liking. Usually, phones that don’t show any indication in their specs other than the resolution are not putting any priority on their cameras.

Since you’re limited to whatever optics the manufacturer provides, it’s not a bad idea to find something that suits your needs and matches the optical experience you’re accustomed to.

For people who are not experienced with cameras, here are a few pointers:

  • A bigger focal length means that you’ll cover less area in the picture. The simplest way to describe focal length is by comparing it to zoom: the higher the focal length, the more “zoomed in” the camera is, while smaller focal lengths give you wider angles. Nikon has a decent guide on this if you’d like more in-depth information. Focal lengths are measured in millimeters. The typical optics on a phone have a 35mm-equivalent focal length somewhere between 20 and 30 mm.
  • The aperture (focal ratio) determines how much light enters the camera. This ratio is notated with an italic lowercase “f,” known as an “f-number.” A higher f-number represents a smaller aperture, which captures less light; a lower f-number means a larger aperture and more light. Aperture also affects depth of field, which is important for shots that isolate an object in focus. For example, compare the two images below:



The top image was taken using a small aperture, and the bottom image using a large one. On some phone cameras, a mechanical iris adjusts the aperture slightly just before a picture is taken. Similarly, focal length is adjusted through optical zoom on the few phones that offer it.
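The relationship between focal length, f-number, and physical aperture size is simple arithmetic. The 4.5 mm / f/2.2 figures below are hypothetical, chosen only to be in the ballpark of phone optics:

```python
def aperture_diameter(focal_length_mm: float, f_number: float) -> float:
    """Physical aperture diameter: focal length divided by the f-number."""
    return focal_length_mm / f_number

# A hypothetical 4.5 mm phone lens at f/2.2:
print(round(aperture_diameter(4.5, 2.2), 2))  # 2.05 mm
```

This is why small phone lenses can still have bright (low) f-numbers: the f-number scales with focal length, so a tiny lens needs only a tiny opening to reach f/2.2.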

Ultimately, a good smartphone camera will have all these things: the ability to zoom by moving the optics (adjusting the focal length) and to change the aperture mechanically. Since the cameras are digital, there also has to be a decent on-board backside-illuminated CMOS sensor to construct these images with great accuracy. After all that, you can worry about resolution. A good camera will also have a decent resolution, although you shouldn’t make a big fuss about anything more than 5 megapixels.
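The 5-megapixel point is easy to sanity-check, since megapixels are just width × height; the resolutions below are common illustrative examples:

```python
def megapixels(width_px: int, height_px: int) -> float:
    """Sensor resolution in megapixels (millions of pixels)."""
    return width_px * height_px / 1_000_000

print(round(megapixels(2592, 1944), 1))  # 5.0 -- a typical "5 MP" sensor
print(round(megapixels(1920, 1080), 1))  # 2.1 -- a full 1080p frame needs only ~2 MP
```

Since a 1080p display can only show about 2 MP at once, anything beyond 5 MP mostly buys you cropping room, not visible sharpness.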

The first thing that comes to mind as far as cameras are concerned is the Nokia Lumia 1020, with its brilliant 41-megapixel camera, its special software, and its spectacular CMOS sensor and optics. There’s also the Samsung Galaxy S4 Zoom (a phone with an integrated full-blown point-and-shoot camera), and the regular S4. The HTC One and iPhone 5S are close runners-up.

Whether you’ve got a thought to add to this or a question about cameras, everything is welcome in the comments section below!


MTE Explains: 4 Questions About RAM Finally Answered Fri, 13 Sep 2013 23:25:08 +0000 Most people know what random access memory (RAM) is, but not its technical details. This article will clear up some common questions regarding RAM that generally don't have clear answers.

The post MTE Explains: 4 Questions About RAM Finally Answered appeared first on Make Tech Easier.

Most people know what random access memory (RAM) is, but when asked about the technical details, most are unsure or have no idea. In this segment, we’ll clear up some common questions regarding RAM that generally don’t have clear answers.

RAM is a form of volatile memory that gets erased once it’s no longer receiving electrical power. There are several benefits to this:

1. It is faster to write data onto a blank slot than to erase a filled slot and refill it with another set of data. Since the data loaded into RAM is different for each session, it would be pointless to store that data permanently only to delete it on the next boot.

2. RAM runs at a much higher frequency than a hard disk, which makes it ideal for storing temporary files so the CPU can process them without having to wait on the slow hard disk. Starting from an empty state, the system can load as many temporary files as possible into RAM without slowing down the whole loading process.


Yes, and no. Are you overclocking your computer? Then it matters a lot. Chances are you wouldn’t be asking this question if you were an experienced overclocker, though. If you’re just using your computer “out of the box,” then RAM speed doesn’t matter as much. The clock speed for RAM depends ultimately on the CPU controller’s clock. This is multiplied accordingly and, ta-da! Your RAM follows suit.

Back in the (terrible) days of SDRAM, speed really mattered. It was the difference between getting a program to open in 20 seconds or 10. But the whole CPU controller clock thing still applied, and you needed new hardware to take advantage of the top speeds. In today’s era, speeds don’t really jump as much as new RAM is released, so you’re in the clear as long as you’re not going to torture your computer by overclocking it.
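For the curious, modern DDR speed ratings are simple to decode. This sketch assumes the standard 64-bit memory bus of a desktop DIMM; the module names are examples, not recommendations:

```python
def ddr_bandwidth_mb_s(transfer_rate_mt_s: int, bus_width_bits: int = 64) -> int:
    """Peak bandwidth of one DDR module: transfers per second times bytes per transfer."""
    return transfer_rate_mt_s * bus_width_bits // 8

# "DDR3-1600" means 1600 mega-transfers per second:
print(ddr_bandwidth_mb_s(1600))  # 12800 MB/s -- the same stick sold as "PC3-12800"
```

The two names on a RAM box are therefore the same number in two units, which is one reason chasing small speed bumps rarely shows up in everyday use.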

As mentioned earlier, the RAM is the buffer for the hard disk as it stores temporary files for the CPU to process. If you only have a small amount of RAM, once it is full, the computer starts spilling its contents onto the hard drive (in a special location known as the page file/virtual memory/swap), which is a much slower piece of hardware. Once virtual memory fills up, your computer slows to a crawl. Your mouse might even fail to move or your clicks will receive no response.

When you increase the amount of RAM in your system, you are increasing the buffer for your hard disk so your content won’t be spilled onto the virtual memory.

Capacity, material quality, quality control, etc. There’s still a lot more to RAM than just clock speed. It’s like asking what makes one brand of window better than the other. If you’re going to buy RAM, get it from a company that will stake its entire reputation on one stick you buy. In other words, make sure the company is performing quality control and testing every stick they can. Durable materials are better for overclocking and other forms of abuse. If you’re going for a good stick, just get one that has a good warranty and doesn’t cost too much.

Break open an old chip and look at the interior. It looks a lot like glass, doesn’t it?


RAM is made, like many other chips, from silicon. The process of melting sand into silicon, coating it, and pressing it into a particular shape differs with each company. Not all companies make their own chips. Some of them buy these chips from others, which makes the purchasing decision more difficult. To get a durable chip, you’ll have to rely on a trusted brand. Reading reviews helps, but you’ll never really know what kind of process the company is using.

There are so many questions you may still have about RAM, it’s impossible to cover them all. As long as you’re curious, why don’t you leave a comment with your unanswered question? We answer quite quickly around here!


The Differences Between MBR and GPT Fri, 13 Sep 2013 14:50:07 +0000 You are probably wondering: what are the differences between MBR and GPT, and is there any benefit to using one over the other? We will clear up your doubts in this article.

The post The Differences Between MBR and GPT appeared first on Make Tech Easier.

If you have dabbled with your hard disk and are always formatting and partitioning, you will surely have come across the terms “MBR” and “GPT”. This is especially evident when you are dual-booting your Mac and are faced with the problem of having to switch from GPT to MBR. You are probably wondering: what are the differences between MBR and GPT, and is there any benefit to using one over the other? We will clear up your doubts in this article.

You probably know that you can split your hard disk into several partitions. The question is, how does the OS know the partition structure of the hard disk? That information has to come from somewhere. This is where MBR (Master Boot Record) and GPT (GUID Partition Table) come into play. While the two are architecturally different, both play the same role: governing and providing information about the partitions on the hard disk.

MBR is the old standard for managing partitions on a hard disk, and it is still used extensively. The MBR resides at the very beginning of the hard disk, and it holds the information on how the logical partitions are organized on the storage device. In addition, the MBR contains executable code that can scan the partitions for an active OS and load its boot code.

On an MBR disk, you can only have four primary partitions. To create more, you can set the fourth partition as an extended partition, inside which you can create more sub-partitions (or logical drives). As MBR uses 32-bit values to record a partition’s position and length in sectors, each partition can only go up to a maximum of 2TB in size. This is how a typical MBR disk layout looks:


There are several pitfalls with MBR. First of all, you can only have four partitions on the hard disk, and each partition is limited to 2TB in size. This is not going to work well with hard disks of large capacity, say 100TB. Secondly, the MBR is the only place that holds the partition information. If it ever gets corrupted (and yes, it can get corrupted very easily), the entire hard disk becomes unreadable.

GPT is the latest standard for laying out the partitions of a hard disk. It makes use of globally unique identifiers (GUIDs) to define partitions, and it is part of the UEFI standard. This means that on a UEFI-based system (which is required for Windows 8’s Secure Boot feature), using GPT is a must. With GPT you can theoretically create an unlimited number of partitions, though most OSes restrict it to 128. Unlike MBR, which limits each partition to 2TB, each partition in GPT can be up to 2^64 blocks in length (as it uses 64-bit values), which is equivalent to 9.44ZB for a 512-byte block size (1 ZB is 1 billion terabytes). In Microsoft Windows, that size is limited to 256TB.
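The 2TB and 9.44ZB figures fall straight out of the widths of the sector-count fields, assuming the traditional 512-byte sector:

```python
SECTOR = 512  # bytes: the traditional logical sector size

# MBR records partition sizes as 32-bit sector counts; GPT uses 64-bit counts.
mbr_max_bytes = 2**32 * SECTOR
gpt_max_bytes = 2**64 * SECTOR

print(mbr_max_bytes / 2**40)             # 2.0 TiB -- the famous 2TB ceiling
print(round(gpt_max_bytes / 10**21, 2))  # 9.44 ZB
```

Note that drives with native 4K sectors shift both limits upward by a factor of eight, since the count fields stay the same width while each counted unit grows.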


From the GPT table scheme diagram above, you can see that there is a primary GPT at the beginning of the hard disk and a secondary GPT at the end. This is part of what makes GPT more resilient than MBR: GPT stores a backup header and partition table at the end of the disk, so they can be recovered if the primary tables are corrupted. It also carries CRC32 checksums to detect errors and corruption in the header and partition table.
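As a toy illustration of how a CRC32 catches corruption: only the “EFI PART” signature below is real GPT layout; the zero padding stands in for the rest of the header fields:

```python
import zlib

# Every GPT header begins with the ASCII signature "EFI PART".
good = zlib.crc32(b"EFI PART" + b"\x00" * 84)
flipped = zlib.crc32(b"EFI PAR7" + b"\x00" * 84)  # one corrupted byte

print(good != flipped)  # True -- the checksum changes, so firmware can detect damage
```

When the stored checksum no longer matches the recomputed one, firmware knows to fall back to the backup table at the end of the disk rather than trust a damaged primary.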

You can also see that there is a protective MBR in the first sector of the hard disk. Such a hybrid setup allows a BIOS-based system to boot from a GPT disk using a boot loader stored in the protective MBR’s code area. In addition, it protects the GPT disk from damage by GPT-unaware disk utilities.

OS Support

Intel Macs use GPT by default, and you won’t be able to install Mac OS X (without tweaks and hacks) on an MBR disk. Mac OS X will run from an MBR disk, though; it’s just that you won’t be able to install onto one.

Most Linux kernels come with support for GPT. Unless you are compiling your own kernel and leave this feature out, you should have no problem getting your favorite distro to work on a GPT disk. One thing to note: you will have to use GRUB 2 as the bootloader.

For Windows, only 64-bit versions from Vista onward support booting from a GPT disk (on UEFI systems); 64-bit XP can use GPT for data disks only. If you are getting a laptop pre-installed with 64-bit Windows 8, it is most probably using GPT. For Windows 7 and earlier versions, the default configuration is MBR rather than GPT.

In most cases, you will be fine with either MBR or GPT. It is only in situations where you need to install Windows on a Mac, or need a partition bigger than 2TB, that you must use GPT or convert MBR to GPT. Also, newer computers that use UEFI may only support booting from GPT.

If you have any questions, feel free to ask in the comments below, and we will be around to answer them.


3 Things You Wanted To Know But Never Asked About MAC Addresses Fri, 06 Sep 2013 23:25:09 +0000 If you've ever looked at the details on your network card, you might have noticed an awkward-looking series of alphanumeric characters. As you gaze upon this sequence of characters, a question may pop up in your head: "What in the name of Torvalds is a MAC address?!" And the answer is here.

The post 3 Things You Wanted To Know But Never Asked About MAC Addresses appeared first on Make Tech Easier.

If you’ve ever looked at the details of your network card, you might have noticed an awkward-looking series of alphanumeric characters, grouped in pairs separated by colons or hyphens. Next to it is the label “MAC,” “MAC Address,” or “Hardware Address.” As you gaze upon this sequence of characters, a question may pop up in your head: “What in the name of Torvalds is a MAC address?!” And the answer, fellow travelers, is below.

“MAC” is short for “media access control,” and it is used when talking about unique addresses that identify the hardware we use to connect to the internet. In most people’s cases, this is a network interface controller (NIC).


But not everyone uses a wired NIC to connect to the internet. Some people use internal WLAN adapters to make wireless connections (through Wi-Fi). These, too, have MAC addresses. Basically, anything that connects to a network is identified by a MAC address. And much like your passport number, it’s unique (at least within your own network). At the time each device or network card is manufactured, the manufacturer brands the hardware with the address, which is stored in an on-board chip.
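The address itself has structure: the first three bytes are the manufacturer’s registered prefix (the OUI), and the last three identify the individual device. A small parsing sketch, using a fictional sample address:

```python
def split_mac(mac: str) -> tuple[str, str]:
    """Normalize a MAC address and split it into its OUI (vendor) and device halves."""
    digits = "".join(c for c in mac if c.isalnum()).upper()
    if len(digits) != 12:
        raise ValueError("a MAC address has exactly 12 hex digits")
    oui = ":".join(digits[i:i + 2] for i in range(0, 6, 2))
    nic = ":".join(digits[i:i + 2] for i in range(6, 12, 2))
    return oui, nic

print(split_mac("00-1A-2B-3C-4D-5E"))  # ('00:1A:2B', '3C:4D:5E')
```

Looking up the OUI half in the public registry is how network tools show you a device’s manufacturer from nothing but its MAC.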

That’s the big question, isn’t it? A MAC address sounds like a superfluous thing to have once you already have IP addresses. And, in many cases, you’d be right. However, your home and/or business networks use MAC addresses to communicate internally. IP traffic rides on top of this, but whenever packets cross the Ethernet (link) layer, devices are still addressing each other by MAC. It may not always stay this way, as some newer network technologies get along without MAC-style addressing.

For some internet service providers (ISPs), MAC addresses present a really easy way to authenticate computers to give them access to the internet. These ISPs don’t require fancy modems, but do require that your computer’s network card (or your router) have a certain MAC address to gain access to the internet. This was the case with an ISP I used 2 years ago, and it tied a static IP permanently to my computer’s MAC address. There are some disadvantages to having a static IP if you’re a home user, but it works very harmoniously when you host four websites and don’t want to reconfigure everything every time your IP changes.

In short, MAC addresses are chiefly used for two things: internal network communication and external network authentication (on some ISPs).

Although it really isn’t necessary – in most cases – to change your MAC address, most modern network cards and routers have this feature. Their default MAC addresses, however, are often inscribed on them.


In Windows 7, the process of changing your MAC address is relatively easy:

  • Click on your “Start” menu and then click on “Network.”
  • Click “Network and Sharing Center” near the top of the window.
  • Click “Change adapter settings” on the left-hand side.
  • Right-click on the network adapter you’d like to change the MAC address of and click “Properties.”
  • Click the “Configure…” button.
  • Click the “Advanced” tab.
  • Select “Network Address.”

A text box labeled “Value” will show up on the right-hand side of the dialog. Select the radio button next to it and type in a new address as 12 hexadecimal digits with no separators. Selecting “Not Present” reverts the value to the default manufacturer-defined address.
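If you want a value that won’t collide with any manufacturer’s real address, the convention is to set the “locally administered” bit and keep the “multicast” bit clear in the first octet. A sketch for generating one (this is an addressing convention, not something the Windows dialog enforces):

```python
import random

def random_laa_mac() -> str:
    """Generate a random locally administered, unicast MAC address."""
    octets = [random.randint(0, 255) for _ in range(6)]
    # Set the locally-administered bit (0x02), clear the multicast bit (0x01).
    octets[0] = (octets[0] | 0x02) & ~0x01
    return "".join(f"{o:02X}" for o in octets)  # the Windows dialog wants no separators

print(random_laa_mac())
```

Addresses with that bit pattern are reserved for exactly this kind of local override, so a well-behaved network will never hand the same value to factory hardware.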

If you still have questions about your MAC address, leave a comment with your question. We have lots of smart readers and authors here who are always handy with these topics!


RAM Optimization in Windows Is Not Required. Here is Why Mon, 02 Sep 2013 23:25:41 +0000 If you are using a RAM optimization tool, it is time to stop now. Windows comes with a good memory management function and using a RAM optimization tool can in fact make your PC run slower.

The post RAM Optimization in Windows Is Not Required. Here is Why appeared first on Make Tech Easier.

For some people, RAM is a precious resource that must be managed carefully. This is especially true for those whose systems have only the minimum amount of RAM necessary to run their version of Windows. It’s also a reason why many people intuitively download and install programs that manage this RAM and try to shave off as much of it as possible from running programs to make the computer faster. Assuming you have Windows Vista or any other version later than Windows XP, I’m about to explain a common misconception about memory and how it’s optimized in your operating system.

Perhaps you’ve noticed a little bit of a bump in speed when using a memory optimizer in Windows. Or maybe it was all psychological auto-suggestion. Who knows? Well, let’s have a look at what goes on under the hood, shall we?

In Windows XP and earlier versions, if your RAM was almost full, you would experience a huge drop in performance, which required some sort of tool to get rid of all the nasty stuff occupying your computer’s resources. On top of that, you were limited to 4 GB of RAM unless you used a 64-bit OS (which wasn’t available before XP). This still rings true, but it was more critical back when 64-bit operating systems were just starting to make their mark on the computing fray. RAM optimization was, quite frankly, a must.


After Windows Vista came along, RAM optimizers were still commonplace. Vista was Windows 7’s deaf and crippled cousin, eating up resources like a frat party. People were alarmed at how much RAM was being allocated out of the blue, and the only way to push back, they thought, was with these tools. Unfortunately, it was to no avail, because RAM occupation wasn’t really the problem. Vista was just lousy, that’s all. When memory did cause trouble, it was because of a memory leak sprung out of control. The RAM optimization tool became the world’s most popular do-nothing gimmick.

You see, Vista and later versions of Windows started prefetching programs into memory, loading them into an address space before you ever start them. With predictive technology (known as “Prefetch” or “SuperFetch”), Windows takes the programs you use most and slips them into your RAM without you ever knowing about it. A RAM optimizer just gets rid of this cached memory, which has no real effect on performance, since Windows would have released it anyway once it determined that your memory was full. The optimizer does its job in one of two ways: it either tells Windows to force all running programs onto the page file (which is way slower), or it gulps down the rest of your RAM itself just so Windows flushes out the cache, and then shrinks back to its original size.


Just like SSD optimization, RAM optimization is simply redundant. And just like SSD optimization, it could also be counter-productive, because…

You read that correctly. RAM optimization can actually create more problems than it solves. Allow me to explain: your most frequently used programs need to load into memory before you see them on your screen. That loading delay is exactly the annoyance the cache exists to hide. Of course, your hard drive slows things down even further, but right now we’re focusing on RAM, not physical storage.

If you run optimization, you simply delete the cached RAM with no effect on what you’re actually using. In Windows Vista and above, full RAM is good RAM. It’s RAM put to good use. If you’re using Windows 7, you can actually see the caching process at work:


That’s my computer’s physical memory in the Task Manager’s “Performance” tab. Notice how I only have 5 MB of free RAM. By the way, Windows 7 and newer versions show you your real RAM usage; Vista did not, which led to a lot of confusion about why most of the RAM was in use. If I used a RAM optimization tool right now, programs like Chrome wouldn’t load as quickly.

If your computer is working slowly and occupying tons of RAM with actively running applications (in other words, it has no room for caching anything), then go to your local computer hardware store and get yourself a RAM upgrade. Optimization will simply do nothing (or worse, it will just tire out your OS). Let us know what you think by typing up a comment!

Image credit: Hand Holding Ddr Memory from BigStockPhoto


How to Enable TRIM For SSD in Ubuntu Tue, 27 Aug 2013 14:50:34 +0000 Windows 8 comes with an Optimize Drive feature that can send the TRIM command to an SSD. What about Ubuntu? How can you enable TRIM for an SSD in Ubuntu? Here's how.

The post How to Enable TRIM For SSD in Ubuntu appeared first on Make Tech Easier.

If you are using a Solid State Drive (SSD), you should know that you shouldn’t run any defragmentation or free space consolidation software on it. So how do you clean up your SSD and free up empty space? TRIM is the command the OS uses to tell the SSD which blocks are no longer in use, so the drive can erase them in advance. Windows 8 comes with the “Optimize Drive” feature that runs the TRIM command regularly. What about Ubuntu? How can you enable TRIM for an SSD in Ubuntu?

Note: The following steps will not work if you have encrypted your partition.

1. First, we have to make sure that the SSD in your computer supports TRIM. In Ubuntu, open a terminal and type:

sudo hdparm -I /dev/sda

If your SSD is not the first drive in the system, change “sda” to the device where Ubuntu resides. Scroll down the output to the “Commands/features” section, which lists what is “Enabled” and “Supported.” If you see a line like “Data Set Management TRIM supported (limit 4 blocks)”, then TRIM is supported by your SSD.


2. Next, we need to test if the TRIM function is working in Ubuntu. In the terminal, type:

sudo fstrim -v /

This will clean up the root partition of the SSD. If successful, you should see something like this:


3. Lastly, we will set up a cron job for the OS to send the TRIM command once every day.

sudo nano /etc/cron.daily/trim

Paste the following code into the blank area:

#!/bin/sh
fstrim -v /

If your HOME directory is located on another partition, you can add an additional line to the end of the above code:

fstrim -v /home

If you want to save the output to a log file, use the following code instead. Note that the LOG variable must be defined first; any writable path will do:

#!/bin/sh
LOG=/var/log/trim.log
echo "*** $(date -R) ***" >> $LOG
fstrim -v / >> $LOG
fstrim -v /home >> $LOG

Save (Ctrl + o) and exit (Ctrl + x).

Now, make the cron job executable:

sudo chmod a+x /etc/cron.daily/trim

That’s it.

Image credit: My SSD


3 SSD Optimization Techniques That Are Useless or Harmful Mon, 26 Aug 2013 23:25:45 +0000 Most people think that they can use the same hard drive optimization techniques on an SSD and achieve even better results. This couldn't be farther from the truth. Here are several SSD optimization techniques you should avoid at all costs.

The post 3 SSD Optimization Techniques That Are Useless or Harmful appeared first on Make Tech Easier.

I said it: SSD optimization is completely bonkers. There are many reasons why, but it all boils down to the mechanisms within your drive. The average consumer looks at a solid-state drive (SSD) and sees only a faster version of the grand old hard disk drive (HDD) that has served us for decades. This is why they download optimization software: they think that if it works on an HDD, it should work even better on an SSD. However, this couldn’t be farther from the truth, and it has almost everything to do with the way an SSD’s mechanisms differ from those of an HDD.


SSD optimization has caught a lot of hype among users who don’t really understand how the drives work. Since SSDs use flash memory, there’s a limited number of times data can be written to a particular cell before it expires and is no longer usable. This is called “write endurance.”

The limitations of this type of flash mean that you have to be as conservative as possible about what data gets written to the drive. That’s why defragmentation utilities are off limits. You read this correctly: do not, by any means, defragment an SSD. HDDs have platters that constantly revolve; read/write heads have to seek out portions of each file, piece them together, and commit them to memory (RAM). This process is excruciating and puts strain on the little mechanical marvel. It isn’t the case with an SSD, though: your average SSD can pull up all the pieces almost instantly, since it doesn’t have to seek through spinning discs.

Defragmentation takes split (fragmented) files and pieces them together into a whole entity. That’s all it does. On an SSD, this is useless and even harmful, since it writes data into cells constantly during the process. The more strain you put on the drive by writing to it, the earlier it will go bust. Just don’t defragment or write too much to it.
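How much writing is “too much”? A rough back-of-the-envelope model gives a feel for it. Every number below is a made-up assumption for illustration, including the write-amplification factor; it is not a spec for any real drive:

```python
def lifetime_days(capacity_gb: float, pe_cycles: int, writes_gb_per_day: float,
                  write_amplification: float = 2.0) -> float:
    """Rough endurance estimate: total writable data divided by daily host writes.

    write_amplification accounts for the drive internally writing more
    than the host asks for (garbage collection, wear leveling).
    """
    total_writes_gb = capacity_gb * pe_cycles / write_amplification
    return total_writes_gb / writes_gb_per_day

# Hypothetical 256 GB drive, 3000 P/E cycles, 20 GB written per day:
print(round(lifetime_days(256, 3000, 20)))  # 19200 days, roughly 52 years
```

Even with pessimistic assumptions the cells outlast normal use by decades, which is precisely why the only real way to hurt the drive is pointless bulk rewriting like defragmentation.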

Some tools come with a free space consolidation feature. When your SSD writes data, it uses cells to store everything. Each cell holds a certain number of bytes, and once a cell contains even one bit of data, it’s declared occupied. So you can theoretically have an almost-empty cell that’s declared full because of a single byte of data. To solve this problem, you could consolidate free space by piecing data fragments together, some of them sharing cells with one another. That’s how free space consolidation works. Unfortunately, it’s a waste of time, because your SSD already does this with its on-board controller, and some operating systems also handle it at the kernel level. Free space consolidators don’t have access to the controller – only to the operating system – so there’s nothing you can really do on the software side to map out your drive correctly.


To completely delete a file on a normal HDD, you need to overwrite the data repeatedly. Imagine what kind of damage you would do to an SSD if you used this method over and over again.

For SSDs, there’s no such thing as a physical overwrite. Instead, operating systems from Windows 7 onward send a special command (TRIM) to an SSD after you delete a file. It tells the SSD to actually erase the data from its physical location. You don’t need to use special erasing tools; they are basically doing the same thing that Windows 7 is already doing.

You paid a hefty price for your SSD. Don’t throw money away by using programs that perform functions that are either harmful or unnecessary. Leave a comment below if you feel like there’s more to this discussion!

Image credit: 6 SSD Questions, Intel X25-V SATA SSD


How To Stream Local Media to Chromecast Wed, 21 Aug 2013 23:25:19 +0000 Google did not advertise the Chromecast to be able to stream local files from your PC, but that doesn't mean it can't do it. Here is a guide to stream local media to Chromecast.

The post How To Stream Local Media to Chromecast appeared first on Make Tech Easier.

The Chromecast is a wonderful device: a $35 dongle that supports streaming YouTube, Netflix, Play Movies, and Play Music. We’ve covered it before, listing six ways to use the device that may not be immediately apparent. This article fleshes out one of those use cases with a guide to streaming local media to the Chromecast. It’s not a polished experience, and audio works much better than video, but it’s a surprisingly easy process.

Streaming local files may not be an advertised feature of the Chromecast, but it doesn’t require the installation of any third-party software to get up and running. To get started, just head over to the Chrome Web Store and install Google’s Chromecast extension for Chrome.


This extension will place the Chromecast icon near the full-screen button found in YouTube and Netflix videos. It also enables you to mirror tabs to your television. This is currently a beta feature, and it’s kind of laggy, but it’s still nice functionality to have in addition to the Chromecast’s core features.


This tab casting feature, however, also supports local media playback. For Windows users, this process is as easy as selecting a file in your file manager and dragging it into the location bar. Alternatively, if you know the path already, you can type it in yourself. Either way, the file will start playing immediately.


Ironically, this process is more difficult on Google’s own desktop operating system. Dragging and dropping is not supported on Chrome OS, but it’s still possible to navigate to the file directly. The path to your downloads folder is file:///home/chronos/user/Downloads/. Append the name of your file to the end of that path and paste it into the location bar. The file should then start immediately, just as it does on other platforms.
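If you’d rather not assemble the URL by hand, a tiny script can build it for you. Here’s a minimal Python sketch (the filename is just a made-up example) that percent-encodes spaces and other special characters so the resulting file:// URL pastes cleanly into the location bar:

```python
from pathlib import PurePosixPath
from urllib.parse import quote

def file_url(directory: str, filename: str) -> str:
    """Build a file:// URL suitable for pasting into Chrome's location bar."""
    path = str(PurePosixPath(directory) / filename)
    return "file://" + quote(path)  # quote() leaves '/' alone, encodes spaces

# Chrome OS downloads folder, with a hypothetical filename:
print(file_url("/home/chronos/user/Downloads", "my video.mp4"))
# file:///home/chronos/user/Downloads/my%20video.mp4
```

Spaces are the usual culprit when a pasted path refuses to load, and the quote() call takes care of them.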


What about music? Not a problem. Streaming audio files is just as easy, and since the files are typically smaller, the results are more satisfactory. Audio playback should be much more stable than trying to stream video.


Video playback may be awful and is likely to look very pixelated. One way to “improve” this situation is to lower the tab projection quality. You can do this by clicking on the Chromecast icon and finding the setting under options. Chrome defaults to 720p, but you can drop it down to 480p standard definition.
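To see why the lower setting helps, compare the raw pixel counts per frame. Actual casting load also depends on Chrome’s encoder and your network, so treat this quick Python calculation as a rough illustration only:

```python
# Pixel counts per frame at the two tab-projection quality settings.
resolutions = {"720p": (1280, 720), "480p": (854, 480)}

pixels = {name: w * h for name, (w, h) in resolutions.items()}
ratio = pixels["720p"] / pixels["480p"]

print(pixels)          # {'720p': 921600, '480p': 409920}
print(round(ratio, 1)) # 2.2 -> each 480p frame is less than half the pixels
```

Roughly half the pixels per frame means noticeably less data for Chrome to encode and push over WiFi each second.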


Unfortunately, this may not completely take care of your problem. There’s a reason Google lists tab casting as a beta product for the time being.

If you’re using Windows, you can also choose to cast your entire screen. It’s sort of the buckshot approach. If you’re streaming your entire screen, then you’re also streaming the local files that are on it. Just bear in mind that this feature isn’t even beta – it’s experimental.


The options for streaming local video files are limited for now, but this is likely to change in the days and weeks ahead. One developer is already experimenting with an Android app that can stream content from your device’s gallery, Dropbox, or Google Drive. We’ll keep an eye out to see what innovations pop up. If you discover anything in the meantime, feel free to share it with us and others in the comments below.

Using the Chromecast with Android, iOS and Chrome Tue, 20 Aug 2013 23:25:35 +0000 Google released its latest assault on the living room in the form of a tiny HDMI dongle, a device capable of streaming media from your computer, smartphone and tablet. Here is how to get the Chromecast up and running in your home.

The post Using the Chromecast with Android, iOS and Chrome appeared first on Make Tech Easier.

Google released its latest assault on the living room in the form of a tiny HDMI dongle, a device capable of streaming media from your computer, smartphone and tablet. It doesn’t work by itself, though; it has to be paired with software on your computer in order for you to send media to the device. The good thing is that the software is free for all users. Here is how to get the Chromecast up and running in your home.

Customers can use the tiny $35 device, which plugs into any available HDMI port on your TV, to “cast” media from the Google Chrome web browser to the TV. Netflix, Google Play and YouTube are already on board, as well as several others, and most other online video should work, too.

You will also need the Chromecast extension, which can be found free in the Chrome Web Store. This will enable almost any streaming video to be sent from the browser to the TV.


In addition, customers can send home media to the device by using a slight trick in Chrome, though it must be a video or audio file type that the browser is capable of playing.

  1. Open a new tab
  2. Press “Ctrl + o”
  3. Open a file (video or audio) that Chrome can play
  4. Click the “Cast” button

Your mobile devices require another step. Head for the Google Play store or iTunes App Store and grab the app provided by the search giant to get underway. Android users will need to be running version 2.3 or newer of the mobile operating system to be compatible – but that should cover (almost) every device currently in use. The app is also free.

Once downloaded and installed, you can start it up for the first time. There is the usual set of privacy policies and terms to agree to, but there is nothing out of the ordinary contained within them.


The app will then begin searching for your Chromecast device – you will, of course, need to be connected to the same WiFi network as the Chromecast for this to all work. I initially had some issues finding the device on my network, and had to try a couple of times. However, with that minor detail out of the way, it was smooth sailing. The app does warn that your WiFi network will be temporarily down as this connection and setup occurs.

You will initially find Netflix, YouTube and Google Play movies and TV listed across the bottom of the app, but you can also download more services from the Play store to add to this. Simply tap your selection to send it to the big screen. All controls are handled from your device, which will act as a remote control.

The Chromecast holds a lot of promise for the future of how we consume media. The price is certainly right, and the operation is smooth. It is now only a matter of waiting for additional services to find their way aboard the platform. Rest assured, traditional media providers should be scared of this.

Things About Mobile Device Batteries You Probably Didn’t Know Fri, 09 Aug 2013 23:25:23 +0000 Most people expect their mobile batteries to last long hours, but the fact is they don't. Worse, people are following bad advice on how to "take care" of their batteries. Here are some facts that you really need to know about battery care.

The post Things About Mobile Device Batteries You Probably Didn’t Know appeared first on Make Tech Easier.

Suffice it to say that a phone is nothing without its battery. This is why so many portable electronics users look for guidance on how to make their batteries perform better and live longer. However, when you look to the internet, you’ll find either misinformation or advice that omits crucial details and regurgitates tips you probably already knew. In this day and age, we have very high expectations for our batteries, despite the fact that battery technology hasn’t advanced nearly as quickly as mobile technology has. As a consequence, device batteries are temperamental beasts. Let’s talk about what you really need to know when it comes to battery care.

At least it’s a myth nowadays. This myth started back when portable electronic devices used nickel-cadmium (NiCad) batteries. NiCad batteries couldn’t accurately gauge their own charge. If you charged your electronic device before it discharged completely, this would backfire on you and create what is known as the “memory effect.” This effect significantly cuts the device’s battery life unless the battery is fully discharged once in a while. Oh, and you don’t have to worry about that anymore.

In the modern mobile era, we no longer use NiCad batteries. Instead, we use lithium-ion (Li-Ion). You may be aware of this, but you’re probably not aware that completely discharging a Li-Ion battery may damage it. Battery charge is determined by voltage, and most Li-Ion batteries operate between 3.3 volts (empty) and 4.2 volts (fully charged). Each time you drain the battery, it risks dropping below the 3.3-volt mark, and whenever it falls below or rises above this range, it loses a little bit of its charge capacity (measured in milliamp-hours).
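Since charge is read from voltage, the relationship can be sketched in a few lines of Python. Real Li-Ion discharge curves are nonlinear, so this linear estimate is only a ballpark illustration of the 3.3–4.2 volt range described above:

```python
def charge_percent(voltage: float, empty: float = 3.3, full: float = 4.2) -> float:
    """Rough state-of-charge estimate from cell voltage.

    Assumes a linear voltage-to-charge relationship, which real Li-Ion
    discharge curves only approximate; treat the result as a ballpark figure.
    """
    clamped = max(empty, min(full, voltage))  # keep inside the safe range
    return round((clamped - empty) / (full - empty) * 100, 1)

print(charge_percent(4.2))   # 100.0 -> fully charged
print(charge_percent(3.75))  # 50.0  -> roughly half
print(charge_percent(3.3))   # 0.0   -> time to plug in
```

Your phone’s battery controller does a much more sophisticated version of this same mapping, which is why the percentage can jump around near the extremes.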

Once your battery reaches 30-50 percent charge, just stick it in the charger and that’s the end of it!


If you let your mobile device sit unattended in your car in the middle of the summer, its battery will lose some of its capacity permanently. The battery’s chemicals are sealed inside a casing that doesn’t give them a lot of wiggle room, and heat exhausts batteries just like it exhausts humans. Although some people may tell you to avoid putting your battery in cold temperatures, cold is really no big deal. In fact, storing an unused battery in freezing temperatures might help it retain more of its charge than it would at room temperature.


You should never drain your Li-Ion battery. End of story. However, you might think that it’s fine to store a fully-charged battery for a prolonged period of time. The more, the better, right?

Wrong. Batteries are more chemically active and store more potential energy when fully charged. The more juice they hold, the more likely their internal chemicals are to crystallize over the long run, which results in a permanent loss of capacity. You should always store batteries at a maximum of 70 percent capacity. Ideally, you should store them at 50 percent.

Batteries remain some of the most enigmatic components in electronics, so it’s no surprise that some people just don’t “get it.” Leave a comment below to let fellow readers know important information about their batteries!

How to Troubleshoot a Router (And Find Out If You Need a New Router) Thu, 01 Aug 2013 23:25:40 +0000 It’s easy to blame ISPs when the Internet goes down, but the problem could also be a faulty router. Here is a checklist for you to troubleshoot a router.

The post How to Troubleshoot a Router (And Find Out If You Need a New Router) appeared first on Make Tech Easier.

It’s easy to blame Internet Service Providers (ISPs) when the Internet goes down, but it isn’t always that simple. Sometimes hardware fails, and that includes your router. Your router won’t let you know something’s wrong; you have to troubleshoot it to find out whether you need a new one. Use the following points as a checklist to troubleshoot a router.

Assumption: This article assumes that you have done the troubleshooting on your computer and are sure that the Internet issue is not due to a software misconfiguration.

Your first step to troubleshoot a router is to check the power. Depending on your setup, it’s possible that the power source has been turned off or the power plug has come loose. While you’re checking the power cords, check the rest of your cables, too. You never know when a cable might come out just enough to cause a service interruption.

After checking that your power cable and other wires are connected snugly, you’ll want to check your Internet signal. Disconnect from the router entirely and plug a PC directly into the source of your Internet. This can be a modem or a wall jack. This will immediately tell you whether there’s a problem with your router or with your Internet signal.

If you are using a wireless router, disable the wireless connection on your computer and connect it directly to the LAN port of the router. If it is working, then the fault could lie in the wireless configuration of the router. If it is not working, then the router could be faulty.

If you are not able to get a wireless connection, you’ll want to take a look at your router settings to see if something might be set up improperly. If you recently changed your router settings, reverse what you did and see if that changes anything. It’s possible your router didn’t like the change.


If your router doesn’t utilize dual-band wireless, household items – like cordless phones, garage door openers and anything else that operates on the same wireless band – can interfere with your signal. If everyone in your neighborhood has a wireless router on the same channel, this can also cause conflicts as the signals bounce from one location to the next.

From your router’s settings, you’ll be able to change the channel for your wireless connection. Change the channel, then power cycle your home network, and see if that solves the issue.
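For the curious, 2.4 GHz channel numbers map to center frequencies by a simple formula, and that formula explains why channels 1, 6, and 11 are the usual recommendations: neighboring channels sit only 5 MHz apart, but each signal is roughly 20 MHz wide, so close neighbors overlap. A small Python sketch:

```python
def channel_freq_mhz(channel: int) -> int:
    """Center frequency of a 2.4 GHz WiFi channel (channels 1-13)."""
    if not 1 <= channel <= 13:
        raise ValueError("expected a 2.4 GHz channel between 1 and 13")
    return 2407 + 5 * channel  # channels are spaced 5 MHz apart

# Signals are ~20 MHz wide, so only channels 5 apart avoid overlap:
for ch in (1, 6, 11):
    print(ch, channel_freq_mhz(ch), "MHz")  # 2412, 2437, 2462 MHz
```

If your neighbors are all crowded on channel 6, hopping to 1 or 11 gives your signal clear air.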


If changing channels and cycling your network doesn’t help, check the manufacturer’s website for your router and see if a firmware update is available. Most Internet users have no clue that a router can be updated, and these updates can make an incredible difference in performance, especially on older router models.

The update process varies from manufacturer to manufacturer, but there should be a place in the router settings that lets you upgrade it. It will either check for an update automatically or let you manually choose a file to start the upgrade. Follow the instructions for your particular router model.

Once finished, cycle your home network, and see if it makes a difference.

If your router doesn’t support 802.11n, currently the most advanced WiFi standard, chances are your router is too old to continue functioning properly. If you notice dead spots in your home or office, the absence of “N” technology in your router may be part of the problem. “N” technology refers to MIMO (Multiple Input, Multiple Output), which means signals in your home or office are often bounced off walls before they reach their destination.

Dead spots occur when a signal has reached its maximum number of bounces and drops off. There isn’t much you can do to combat dead spots other than move your router or upgrade to a router that uses “N” technology.


If you’re still having issues troubleshooting your router, the last step should be to reset it to factory settings. This will restore it to the state it was in right out of the box. You’ll need to reconfigure the router to your liking, so this should be your last-ditch effort to troubleshoot your router.

If none of the above help you troubleshoot your router, it’s probably time to buy a new one. Routers aren’t that expensive anymore, unless you want a high-power model. Buying a new router after troubleshooting can save you more time than continuing to deal with dropped signals, dead spots and other issues that come up when a router goes bad.

Image Credit: Flickr

6 Unconventional Uses For Google Chromecast Tue, 30 Jul 2013 14:50:31 +0000 Google Chromecast is an affordable $35 media streamer that you plug into your TV or monitor to stream YouTube, Netflix, and Google Play content from your computer. What sets Chromecast apart is that these uses are just the beginning. Here are six areas where a Chromecast could excel where other offerings have not.

The post 6 Unconventional Uses For Google Chromecast appeared first on Make Tech Easier.

Google’s Chromecast is an affordable $35 media streamer that you plug into your TV or monitor, and it’s no surprise that the device has garnered so much attention so soon after its release. It’s a highly versatile device. Yes, it can stream YouTube, Netflix, and Google Play content, but between consoles, Blu-Ray players, and smart TVs, you probably already have a means of doing most of this. What sets Chromecast apart is that these uses are just the beginning. Here are six areas where a Chromecast could excel where other offerings have not.


Chromecast is great for the traveling business person who finds themselves shuffling from one hotel to another. Who cares what cable package is available when you can pop in a Chromecast and stream the same media you would have at home? As long as you have easy access to a WiFi network and a TV with an unhindered HDMI port, you have what you need to get up and running.


Workers have needed to present information on a larger screen for colleagues since the dawn of business meetings. Projectors get the job done, but they’re expensive and unwieldy. A Chromecast is substantially cheaper. Place a large TV or monitor in a conference room and plug one in the back. They’re cheap enough to place in every room in an office, and it’s easy for one device to take over the screen for another, perhaps speeding up team presentations.


Chromecasts don’t have parental controls built in, but a person needs access to a smartphone, tablet, or computer to make use of one. Parents with small children can turn a movie on using their smartphone and walk away knowing their children can’t accidentally change what’s on to something more adult.


Since Chromecasts are so portable, you can take one to anyone’s house and know exactly which movies and music you will be able to stream. Though it could get chaotic, it’s an easy way for everyone to share YouTube videos with one another, with each person able to search and play from their own device. If the party attendees aren’t particularly tech savvy, or they’ve had a little too much to drink, you may even be able to convince them that you’re a wizard. And since Chromecasts are so cheap, it won’t be the end of the world if things get out of hand and one somehow ends up broken.


Google hasn’t advertised this feature, but it’s possible to stream locally saved media from a computer to a Chromecast device. As long as your files are converted into the right format, you can take those files buried on your hard drive and watch them on a display that will make them pop. It may not be as smooth, but it sure is less expensive and time consuming than setting up your own home media server. And when you’re done, you’ll be able to put your remote control away for good.


No, the Chromecast won’t clean your apartment for you, but it could be your ticket out of a messy entertainment cabinet. If you don’t really watch cable or purchase DVDs anymore, you can clear out all of the boxes from under your television, stick a Chromecast in the back, and let the power cable be the only remaining cord. Manage all of your media on your computer or in the cloud and only fire up the TV when you want a bigger display.

You probably already have a way to stream cloud media to your TV – maybe even two or three – but don’t let that stop you from giving the Chromecast a serious look. This device is flexible in a way previous media streamers are not. This can turn you into a media streaming warrior on the go, and it can save you a bunch of money in the process. If you already have a Chromecast, or you have ideas about how to make use of the device, share your experience and ideas with us in the comments below.

Why Are Solid-State Drives So Expensive? Fri, 19 Jul 2013 23:25:12 +0000 Solid-state drives are often praised for their speed, but to enjoy that speed, you often have to pay a high price. This article explains why solid-state drives are so expensive.

The post Why Are Solid-State Drives So Expensive? appeared first on Make Tech Easier.

Ever since solid-state drives (SSDs) came out, the hype around them has been overwhelming. Media outlets were talking about how much faster they are than hard disk drives (HDDs) due to the lack of moving parts. In a way, they’re not wrong. But since SSDs went on the market, people have been asking themselves whether they’re worth their hefty price tags. Moreover, people have also been asking why these drives are 5-10 times more expensive than HDDs to begin with. There are multiple reasons for this, and I’ll explain below why solid-state drives are so expensive.


Flash memory is a very widely used technology. It’s in your USB drive, your memory cards for video game systems, and your phone. Negated AND (NAND) flash is special in the sense that it maintains storage without needing continuous electrical power. This is a requirement for SSDs, since there’s no residual energy running through them when you turn off your computer. There’s one problem with NAND: it has a finite number of write cycles, meaning that each transistor will wear out over time.

If your SSD wears out its NAND transistors, you may end up with anything from slight malfunctions to catastrophic data loss! To mitigate this, SSD manufacturers use very sophisticated techniques to prolong the lives of their transistors. The transistors will still die at some point, but not as soon as they otherwise would. One of these techniques consists of including spare transistors to compensate for the dead ones.

It’s difficult for manufacturers to get around the NAND transistor limitations, and they will probably never completely eliminate the issue. Writing to an SSD constantly will destroy it eventually. That’s why you should store just your operating system and core programs on it and keep everything else (documents, invoices, pictures, etc.) on a hard drive.
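You can get a feel for how long a drive might last with some back-of-the-envelope math. The numbers below (the P/E cycle rating, the write amplification factor, and the daily write volume) are made-up illustrative figures, not specs for any real drive:

```python
def ssd_lifetime_years(capacity_gb: float, pe_cycles: int,
                       daily_writes_gb: float,
                       write_amplification: float = 2.0) -> float:
    """Back-of-the-envelope SSD endurance estimate.

    Total writable data is roughly capacity * P/E cycles; real drives also
    write extra data internally (write amplification), which we fold in.
    All figures here are assumptions for illustration, not drive specs.
    """
    total_writable_gb = capacity_gb * pe_cycles / write_amplification
    return round(total_writable_gb / daily_writes_gb / 365, 1)

# e.g. a 120 GB drive rated for 3000 P/E cycles, writing 20 GB per day:
print(ssd_lifetime_years(120, 3000, 20))  # 24.7 years
```

Even with these rough assumptions, the takeaway matches the advice above: under normal desktop workloads the drive outlives its usefulness, but hammering it with constant writes eats into that margin.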


Aside from the whole NAND issue, assembling an SSD is a highly complex process. The controller and firmware must both fit inside a small space and then be tested for hours for stability and compatibility with the computers they will be installed in. This adds significantly to the cost of production.

The manufacturing cost is also the reason why SSD prices get progressively higher per GB for higher-capacity units. The opposite is true for HDDs, which have little problem packing more storage into a small space thanks to their mechanical design.
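A quick way to compare drives is price per gigabyte. The prices in this little Python sketch are hypothetical round numbers in the spirit of 2013-era pricing, just to illustrate the gap:

```python
def cost_per_gb(price_usd: float, capacity_gb: float) -> float:
    """Price per gigabyte, rounded to the cent."""
    return round(price_usd / capacity_gb, 2)

# Hypothetical prices for illustration only:
print(cost_per_gb(180, 240))    # SSD, 240 GB at $180 -> 0.75 $/GB
print(cost_per_gb(100, 2000))   # HDD, 2 TB at $100   -> 0.05 $/GB
```

With numbers like these, the SSD costs roughly fifteen times more per gigabyte, which is exactly why the usual advice is a small SSD for the operating system and a big HDD for everything else.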

While demand for solid-state drives is increasing, they still occupy only a very small market share compared to HDDs. As more and more computer manufacturers include SSDs as the default storage device in laptops and desktops, we will definitely see prices drop in the future (in fact, prices have already dropped compared to a year back). But as of now, the price of SSDs remains high.

There’s good news, though. The rise of mobile devices creates a larger overall demand for solid-state storage. This creates significant incentive to make these technologies cheaper.

A combination of expensive raw materials, low market demand, and costly manufacturing processes makes for the hefty prices of SSDs. As with all electronics, SSDs get cheaper as time passes, but bringing the price down remains a significant challenge. Be sure to leave a comment below with your thoughts on SSD prices!
