Make Tech Easier » Linux — Uncomplicating the complicated, making life easier (Thu, 07 Nov 2013)

Run Automated Scripts Over SSH (Wed, 06 Nov 2013)

Running automated scripts over SSH can be a difficult task since it requires you to enter a password for every connection. Here is how you can overcome this problem.

The post Run Automated Scripts Over SSH appeared first on Make Tech Easier.

We’ve shown you how to use SSH to transfer files securely. But there’s a major issue with SSH’s default behaviour: you are only connected to the remote machine after you’ve manually entered the password, which rules SSH out for any task you want to run unattended. Or does it?

Here’s a quick lowdown on how the OpenSSH CLI tools (scp and sftp) work so that you can better appreciate the issue. When you want to copy files to or from a remote host, you can use scp, which automatically initiates an SSH connection to that host. Every time you run a scp command it establishes a new connection to the remote, so if you run multiple scp commands you’ll be entering the same password several times.

This is why you wouldn’t want to use scp in any scripts you want to run unattended. There’s also the fact that if you have multiple accounts on several machines on the network, you’d have trouble memorizing unique, strong passwords for each.

To overcome this problem, you need to switch OpenSSH’s default authentication mechanism to a key-based system.

By default, OpenSSH only uses keys to authenticate a server’s identity. The first time a client encounters a new remote machine, you’ll see something like this:

$ ssh admin@remote-host
The authenticity of host 'remote-host (203.0.113.10)' can't be established. 
ECDSA key fingerprint is da:e8:a2:77:f4:e5:10:56:6d:d4:d2:dc:15:8e:91:22. 
Are you sure you want to continue connecting (yes/no)?

When you respond by typing “yes”, the remote host is added to the list of known hosts. So in addition to the server authenticating the client by asking for a password, the client also authenticates the server using a key.

Similarly, you too can get yourself a set of keys to prove your identity. OpenSSH uses a pair of keys to prove your identity and create a secure connection to a remote server. The private key is for-your-eyes-only and is used by your OpenSSH client to prove your identity to servers. Then there’s the public key which you’re supposed to keep under all your accounts on all the remote machines you want to SSH into.

To create a key, on your client enter:

$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/bodhi/.ssh/id_rsa): 
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /home/bodhi/.ssh/id_rsa.
Your public key has been saved in /home/bodhi/.ssh/id_rsa.pub.

Remember not to leave the passphrase empty, and make a note of the location where the keys are stored. The “id_rsa” file is readable only by your account, and its contents are encrypted with the passphrase you supplied during generation.
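As a safe, concrete illustration, you can generate a throwaway key pair in a temporary directory and inspect the files ssh-keygen creates. The path and passphrase below are placeholders, not your real keys:

```shell
# Create a scratch key pair; your real keys belong in ~/.ssh
tmpdir=$(mktemp -d)
ssh-keygen -t rsa -b 2048 -N 'example-passphrase' -f "$tmpdir/id_rsa" -q

# The private key is created readable only by you (mode 600);
# the .pub half is the one you distribute to servers
ls -l "$tmpdir/id_rsa" "$tmpdir/id_rsa.pub"
rm -rf "$tmpdir"
```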

The next step is to copy the public key to the remote server. Assuming you wish to log in as user “admin” on a remote machine (call it “remote-host”), you can move the key with a single command:

$ ssh-copy-id -i ~/.ssh/id_rsa.pub admin@remote-host
admin@remote-host's password:

After you’ve provided the password for the remote account, the public key will be automatically placed in the correct location on the remote server, which by default is the “~/.ssh/authorized_keys” file.

When you now ssh into the remote machine, you’ll be prompted for your key’s passphrase instead of the remote account’s password. The advantage of using keys is that, rather than authenticating you with a password that has to be sent to the server on every login, the remote server and your client establish your identity based on the key pair.

Also you can now ssh into several remote boxes using the same passphrase, as long as these remote machines have your public key. So you don’t have to remember multiple passwords.

But you still can’t run scripts without being interrupted for passphrases.

Related: How to Enable Two-Factor Authentication for SSH Connection

OpenSSH bundles a tool called ssh-agent, which keeps your decrypted private keys in memory. Once an agent is running, the SSH clients will interact with the agent instead of prompting you for passphrases.

You can start the agent with "ssh-agent /bin/bash", assuming you are using the bash shell.

Any commands which require access to your OpenSSH private keys will be intercepted and answered by the agent.

Once the agent is running, you need to equip it with your keys. This is done by invoking the “ssh-add” program, which by default loads the keys from the default identity file (~/.ssh/id_rsa).

$ ssh-add 
Enter passphrase for /home/bodhi/.ssh/id_rsa: 
Identity added: /home/bodhi/.ssh/id_rsa (/home/bodhi/.ssh/id_rsa)

Now when you log into the remote computer with "ssh admin@remote-host", you’ll be logged in without entering the passphrase!

Similarly, scp and sftp will also be able to connect to the remote hosts without ever asking you for a passphrase. So you can now schedule and run scripts that manipulate files on a remote machine automatically.
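To tie it together, here is a minimal sketch of an unattended backup script. The host, account and paths are hypothetical, and it assumes ssh-agent is already running and holds your key; the scp command is printed rather than executed so you can inspect it first:

```shell
#!/bin/bash
# Sketch: nightly push of a local directory to a remote host.
# With ssh-agent holding the key, scp runs without prompting.
REMOTE="admin@backup.example.com"   # hypothetical account and host
SRC="/var/www/html"                 # what to back up
DEST="backups/$(date +%F)"          # dated directory on the remote

# Dry run: print the command; drop the 'echo' to actually copy
echo scp -r "$SRC" "$REMOTE:$DEST"
```

A script like this can then be scheduled from cron, since nothing in it waits for keyboard input.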

Now that you are using keys, it’s also a good idea to disable authentication via passwords. To do this, edit the remote server’s config file (/etc/ssh/sshd_config) and change the “PasswordAuthentication” parameter from “yes” to “no”. From then on, anyone who tries to connect to your SSH service without a matching key on the server will be denied access without even seeing a password prompt.
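For reference, the relevant lines of the file might end up looking like this (a sketch — all other settings in the file stay as they are):

```
# /etc/ssh/sshd_config (relevant lines only)
PubkeyAuthentication yes
PasswordAuthentication no
```

Remember to restart the SSH service afterwards for the change to take effect (on Ubuntu, for example, with "sudo service ssh restart").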

Image credit: Karunakar Rayker

Transfer Files Securely Using SCP in Linux (Mon, 04 Nov 2013)

The attraction of SSH is that the connection between the two machines is encrypted. You can also securely transfer files using the scp command. Here's the complete guide.

The post Transfer Files Securely Using SCP in Linux appeared first on Make Tech Easier.

The most common way to get terminal access to a remote Linux machine is to use Secure Shell (SSH). To work, the Linux server needs to be running an SSH server (OpenSSH), and at the other end you need an SSH client, something like PuTTY on Windows or the ssh command line tool on Linux or other Unix-like operating systems such as FreeBSD.

The attraction of SSH is that the connection between the two machines is encrypted. This means that you can access the server from anywhere in the world safe in the knowledge that the connection is secure. However the real power of SSH is that the secure connection it provides can be used for more than just terminal access. Among its uses is the ability to copy files to and from a remote server.

To prepare the server, you need to install the openssh-server package. On Ubuntu, you can install it from the Ubuntu Software Center or using the command line:

sudo apt-get install openssh-server

Next, you need to discover the IP address of the server. On Ubuntu, the IP address is shown in the Network applet in System Settings, or you can use the command line:

ifconfig

In the output, look for the line starting with inet under eth0; it shows the server's IP address. The examples below use the placeholder address 192.168.1.100 — substitute your server's actual address.

To test the SSH connection, move to the Linux client machine and type:

ssh gary@192.168.1.100

where 192.168.1.100 is the IP address of the server and “gary” is your username on it. Enter your password when prompted and you will be connected to the remote machine. If you get a question about the “authenticity of host can’t be established,” just answer “yes” to the question. It is a security check designed to make sure that you are connecting to your actual server and not an impostor.

Now that you have tested the SSH connection, you can start to copy files between the two machines. Secure copying is achieved using the scp command. The basic format of the scp command is:

scp /path/to/local/file user@IP-address:/path/on/remote

For example, to copy the file “data.zip” from the local machine to the “backups” folder in the home directory of user “gary” on the remote server with the IP address of 192.168.1.100, use:

scp data.zip gary@192.168.1.100:backups/

Similar to when you connect using ssh, you will be prompted for the password. You won’t be prompted for the username as that was specified in the command.

You can also use wild cards like this:

scp *.zip gary@192.168.1.100:backups/

To copy a file from the remote server to the local machine, just reverse the parameters:

scp gary@192.168.1.100:backups/data.zip .

Notice the dot at the end of the command which means “the current directory,” as it does with the standard cp or mv commands.
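The trailing dot behaves exactly as it does with local commands, which you can verify with plain cp (the file and directory names here are just scratch examples):

```shell
# Set up a scratch directory with one file in it
tmp=$(mktemp -d)
mkdir "$tmp/src" "$tmp/dest"
echo "hello" > "$tmp/src/file.txt"

# From inside dest, '.' means "copy it here"
cd "$tmp/dest"
cp ../src/file.txt .
ls    # file.txt
```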

And the same with wild cards:

scp gary@192.168.1.100:backups/*.zip .

To recursively copy a directory to a remote server use the -r option:

scp -r backups/ gary@192.168.1.100:

To recursively copy a directory from the remote server to the local machine, use:

scp -r gary@192.168.1.100:backups/ .

If you don’t want to place the incoming files in the current directory (the dot at the end), you can specify a different directory name instead:

scp -r gary@192.168.1.100:backups/ backups-from-server/

scp is a powerful yet convenient way to copy files to and from a server without having to set up FTP or other file-sharing servers. It has the added bonus of being secure (something that can’t be said for a default FTP installation). To progress further, try experimenting with the -C option, which enables compression during the copy, or the -l option, which limits the bandwidth used.

Date/Time Missing In the Menu Bar in Ubuntu 13.10? Here’s the Fix! (Sun, 03 Nov 2013)

If you upgraded to Ubuntu 13.10 and find the date/time indicator missing in the menu bar, here's the fix.

The post Date/Time Missing In the Menu Bar in Ubuntu 13.10? Here’s the Fix! appeared first on Make Tech Easier.

If you have upgraded your Ubuntu machine to Ubuntu Saucy 13.10, one of the things that you might find missing is the date/time indicator in the menu bar. And if you visit the “System Settings -> Time and Date -> Clock” section, you might find that the option to add it to the menu bar is greyed out and disabled.


If you are having this problem, here’s the fix.

1. Reinstall indicator-datetime. It should be installed by default, but just in case you have removed it unknowingly, it is best to run the install command again.

sudo apt-get install indicator-datetime

2. Next, reconfigure the time zone data:

sudo dpkg-reconfigure --frontend noninteractive tzdata

3. Lastly, restart the Unity panel service:

sudo killall unity-panel-service

That’s it. The date and time indicator will appear in the menu bar now.

The Ultimate Guide to Speed Up Your Linux Computer (Fri, 01 Nov 2013)

Everyone loves a speedy computer, regardless of whether it is a new or old machine. Here is a compilation of the tricks we use to speed up our Linux computers.

The post The Ultimate Guide to Speed Up Your Linux Computer appeared first on Make Tech Easier.

Everyone loves a speedy computer. No matter how fast your computer is already running, I am sure you are keen to make it run even faster and smoother. Here is a compilation of the tricks we use to speed up our Linux computers.

The tips in this tutorial are suitable for speeding up both modern multi-core setups and older single-core hardware that is low on resources. Also note that some of the tricks can be carried out with ease, while others require some familiarity with the Linux command line.

Whether you have a dual-boot setup or not, if you’ve installed a Linux distro, your boot will surely be interrupted by the GRUB bootloader. By default, most desktop Linux distros will display the GRUB bootloader for anywhere from 10 to 30 seconds. Did you know that you can trim the duration of the bootloader, or even skip the countdown completely?

Fire up a terminal and open the /etc/default/grub file in your favourite text editor.

sudo nano /etc/default/grub

Look for the GRUB_TIMEOUT variable and change its value to something like 5 or 3. Set it to 0 to disable the countdown entirely (the first entry will be selected by default).

Save (Ctrl + o) and close the file (Ctrl + x). Then run

sudo update-grub

for the change to take effect.
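If you’d like to rehearse the edit before touching the real file, you can run an equivalent sed substitution on a mock copy first — the file contents below are illustrative:

```shell
# Make a mock grub config and change its timeout with sed
mock=$(mktemp)
printf 'GRUB_DEFAULT=0\nGRUB_TIMEOUT=10\n' > "$mock"

# Replace whatever value is currently set with 3
sed -i 's/^GRUB_TIMEOUT=.*/GRUB_TIMEOUT=3/' "$mock"
grep GRUB_TIMEOUT "$mock"    # GRUB_TIMEOUT=3
rm -f "$mock"
```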

One of the major contributors to longer boot times is that your system starts unnecessary apps and services during startup.


The Ubuntu distro ships with the “Startup Applications” tool to add and remove any apps that the distro will launch on startup. Launch the app from the Dash and disable any of the services you find there.

To see all services (including those that are hidden or don’t come with a GUI), fire up a terminal:

cd /etc/xdg/autostart
sudo sed --in-place 's/NoDisplay=true/NoDisplay=false/g' *.desktop

Back in “Startup Applications,” you should now find additional startup programs such as “Personal File Sharing,” “Ubuntu One,” etc. You can read through their descriptions and disable any you don’t require.

To disable a service from starting, select a service and simply uncheck the checkbox next to its name. Do not click on the “Remove” button, otherwise you’ll have to add the service manually before you can re-enable it.
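If you’re wary of running that sed command on /etc/xdg/autostart directly, you can try the same substitution on a sample .desktop file first — a sketch with a made-up entry:

```shell
# Create a sample autostart entry that is hidden from the GUI
work=$(mktemp -d)
printf '[Desktop Entry]\nName=Example\nNoDisplay=true\n' > "$work/example.desktop"

# Same substitution as above, run on the scratch copy
sed --in-place 's/NoDisplay=true/NoDisplay=false/g' "$work"/*.desktop
grep NoDisplay "$work/example.desktop"    # NoDisplay=false
rm -rf "$work"
```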

Instead of the good ol’ init daemon, Ubuntu and its derivatives are now using Upstart to manage services.

You can check the status of a service with the “status” command. For example,

sudo status cups

will report the state of the service (“running” or “stopped”) and print its unique process ID. To disable a service, you need to create what’s called an override file. This file takes precedence over the original service file.

To illustrate, assume your distro lists the MySQL and Apache services as running although you don’t use them. To disable them, you need to create two override files, such as:

sudo sh -c "echo 'manual' > /etc/init/mysql.override"
sudo sh -c "echo 'manual' > /etc/init/apache2.override"

These commands instruct Upstart that these services will be started manually by the user when required. Note that these files are placed under the “/etc/init” directory and not “/etc/init.d”. When Upstart encounters the override files for these two services, it will ignore the instructions in the original service files.

Cutting down services and preventing apps from auto-starting does help, but if you need drastic changes you need to patch your kernel. For starters, there are several performance-optimized kernel patches available on the Internet. None, however, is as popular as the one maintained by Linux kernel developer Con Kolivas.

Kolivas’ patchset is built with an emphasis on desktop performance. After you’ve installed it, you’ll notice performance improvements in everyday desktop tasks, as well as while playing games and playing and producing multimedia.

Here’s a customised script (originally written by members of the Ubuntu Brazilian community) that will download vanilla kernels along with Kolivas’ patches and compile them into installable binaries.

Download the script:

cd /tmp
wget --no-check-certificate

Download the kernel and the patches and compile them:

time bash kernel-ck-ubuntu

The above step will take some time to complete. When it’s done you’ll have a bunch of binaries that you can install with:

sudo dpkg -i ./linux-*.deb

If you are impatient, you can also download pre-compiled binaries of the patched kernel for your architecture.

It might seem obvious, but fancy graphical compositing effects are not suitable for slower machines and should be turned off immediately.

There are also some fancy features that we have taken for granted. For example, the thumbnails preview in the file manager. It wouldn’t make much difference when viewing the contents of a folder with a few files. But open a folder with dozens of files on a slow machine and the file manager will consume precious resources generating thumbnails.


To turn off thumbnails in Nautilus, head to “Files -> Preferences -> Preview” and set the value of “Show thumbnails” to “Never”.

Similarly the Nepomuk, Strigi and Akonadi features in the KDE desktop will hog memory resources. You can disable Nepomuk and Strigi from “System Settings” by heading to the “Desktop Search” section.


To disable Akonadi, shut down the running Akonadi server with

akonadictl stop

Now edit the file ~/.config/akonadi/akonadiserverrc and change the “StartServer” parameter from true to false.

Since Akonadi is tied deep into the KDE desktop, when you launch any Akonadi-enabled app, it will automatically start the Akonadi server. Some KRunner runners and Plasma widgets also use Akonadi so you have to disable them as well.

To disable Akonadi-enabled KRunner runners, press “Alt + F2” and click on the “wrench” icon. Now uncheck the “Nepomuk Desktop Search” and “Instant Messaging Contacts” runners. Next, you need to tell the Digital Clock widget not to display calendar events: right-click the digital clock in the panel, head to “Digital Clock Settings,” switch to the “Calendar” tab and uncheck the “Display Events” option.

Thanks to the richness and variety of apps in the open source universe, there’s no dearth of alternatives, including some designed especially for slower machines.

You can start by switching to a lightweight display manager such as XDM instead of the LightDM, GDM or KDM that comes with your distro. XDM isn’t as pretty as the others, but it exerts a minuscule demand on the hardware.

You can also switch to a lighter desktop environment or window manager, such as Xfce, Openbox or Enlightenment.


Or, if you are really adventurous, you can go whole hog and switch to a lightweight distro, like Puppy Linux, Lubuntu, CrunchBang, etc. These distros put in quite an effort to make sure their offerings don’t tax your hardware. For example, Puppy Linux is loaded with lightweight custom apps of all sorts and the Lubuntu distro ships with the zram kernel module to improve its performance on machines with little RAM.

If you use any of these tips to speed up your computer, or have some of your own, do share your experience by adding a comment below.

Image credit: Caspar Diederik

Three top-rated music players for Ubuntu (Thu, 31 Oct 2013)

In the "Sound & Video" category of the Ubuntu Software Center, three of the top-rated music players are Audacious, Clementine, and Qmmp. Let's check them out.

The post Three top-rated music players for Ubuntu appeared first on Make Tech Easier.

One of the advantages of the Ubuntu Software Center is that it lets you see which applications are preferred by other Ubuntu users. In the “Sound & Video” category, there are several different music players available. Three of the top-rated programs (in alphabetical order) are Audacious, Clementine, and Qmmp. Each player takes a different approach to playing your music: Audacious and Qmmp are playlist-based media players, whereas Clementine is a full-blown media manager as well as a media player.

All three programs can be downloaded and installed via the Ubuntu Software Center either from the “Sound & Video” category or by using the search box. Here is an overview of these three top rated music players.

Audacious is a simple media player with support for a wide range of audio formats including MP3, AAC, OGG, FLAC, WAV and WMA. It works using playlists which you create manually by adding files and directories to the list. The list can then be saved for later. If you just want to listen to one album then you would just quickly add those files to a new playlist and start listening. The advantage of playlists is that you can create your own lists for different genres, themes or moods. However there is quite a bit of initial work to create, populate and save those lists, but once all the hard work is done, you will have playlists for every disposition.


The biggest disappointment of Audacious is the initial GTK interface. It looks outdated and unrefined. When I initially saw it, I thought I must have downloaded the wrong program. However, the interface can be changed by going to the View menu and selecting “Interface -> Winamp Classic Interface.” The GTK interface is then replaced with a sleek Winamp-type look, which you can tweak using standard Winamp skins.


The player also has an equalizer, a few visualization modes and support for plugins.

Clementine takes a very different approach than Audacious and Qmmp. It is a full media manager with a smart library (that automatically scans your music folders for new songs), and it also works with a variety of online services such as Last.fm, Spotify, and SoundCloud.


After the initial install, go to Preferences -> Music Library and add a new folder, most likely your Music folder. Clementine will scan this folder and using the tag data in the audio files, it will build a smart library which allows each album and artist to be found alphabetically plus it will generate some smart playlists including: 50 random tracks, Favorite tracks, Never played and so on. There is also a built-in directory browser which shows you the media as you have them organized in your Music directory.

When playing a song, Clementine will query Last.fm’s music database to get information about the song, and it will also fetch information about the artist from Wikipedia. It also attempts to create a list of genres for the artist and recommends similar artists.

The search function will find music in your media library, but it will also query the various online services to try to find the music. If you have premium accounts with services like Spotify and Last.fm, this means you can stream songs directly from them. Without a premium account, you can still do things like scrobble music on Last.fm and so on.

Clementine also supports playlists, a cover manager, an equalizer and various visualization modes.

Clementine is also available for Mac and Windows.

Qmmp is similar to Audacious in many ways except that it uses Qt rather than GTK as the user interface toolkit. However since the UI mimics that of Winamp, any actual Qt specific styles are buried deep in the program and can only be seen when using the file open dialog or changing the preferences etc.


Qmmp is playlist based like Winamp and Audacious, so you need to create playlists in a very similar fashion to the others. In many respects, Audacious and Qmmp function identically because they are both modelled on Winamp. However, Qmmp has more features. It supports more esoteric media formats like chiptune, Musepack and MIDI, plus it has plugins for scrobbling on Last.fm, a lyrics plugin and a cover plugin, to name a few.

Clementine is clearly the most sophisticated and feature-rich media player of the three. Its full media manager and support for various online services place it above the other two. Audacious and Qmmp, by contrast, are trying to be faithful clones of Winamp; if you like Winamp, then one of them might be a better fit for you. On Ubuntu it doesn’t really matter if you use a GTK or Qt based application; on other distributions you might be forced to choose one or the other depending on toolkit support. Between Audacious and Qmmp, the latter is probably better due to its extra functionality and the fact that Audacious’ bland GTK interface is an eyesore until you switch to Winamp mode.

Forget Mail Clients, Send Email from the Command Line [Linux] (Wed, 30 Oct 2013)

If there is a need for you to send email from the command line, say to report the progress (or failure) of a backup process, you can easily do so in Linux. Check it out here.

The post Forget Mail Clients, Send Email from the Command Line [Linux] appeared first on Make Tech Easier.

Sending an email is something you often don’t have to think twice about. Simply fire up your email client, be it web or desktop based, compose a message, enter the recipient’s email address and click “Send.” But what if there is a need for you to send email from the command line, say to report the progress (or failure) of a backup process?

In Linux, sending emails from the terminal is really a piece of cake. You will need to set up a mail server though (Postfix or Sendmail). To make it easier, you can just install “mailutils”, which will then install Postfix for you and allow you to send email using the “mail” command.

On Ubuntu (or any Debian-based distro), install mailutils with the command:

sudo apt-get install mailutils

It will then prompt you to configure Postfix (if it is not already installed).


mailutils configure postfix more steps

And the last thing to configure is the FQDN, which will then be used as the domain name in the “From” field.

mailutils configure postfix fqdn

Once you have installed “mailutils”, you can start to send email from the terminal using the following syntax:

mail -s "Subject" recipient-address <<EOF
message here
EOF
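The <<EOF part is a shell here-document: everything up to a line containing only EOF is fed to the command's standard input. You can see the mechanics with plain cat, which simply echoes back what mail would receive:

```shell
# cat prints its stdin, so this shows exactly what the
# here-document delivers to the command
cat <<EOF
This is line one of the message body.
This is line two.
EOF
```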

For example, to send an email to “user@example.com” (a placeholder — substitute a real address) with the subject “Send email from terminal”, the command to use is:

mail -s "Send email from terminal" user@example.com <<EOF
Enter the email content here. You can write paragraphs of text here if you want.
EOF


And this is what you will see in your email inbox:

mail received in gmail

Mutt is yet another text-based mail client that you can use to send emails from the Terminal. What makes it better than “mail” is that it comes with additional features like:

  • color support
  • message threading
  • MIME support (including RFC2047 support for encoded headers)
  • PGP/MIME (RFC2015)
  • POP3 and IMAP support
  • etc.

To install mutt, simply use the command:

sudo apt-get install mutt

To get started, run mutt in the terminal:

mutt
This will load your email “inbox”.

mutt inbox

Press “m” to compose a new email. It will prompt you to enter the recipient’s email address.


Next, it will prompt you to enter the Subject.


After that, it will open up the nano text editor where you can compose your message. Press “Ctrl + o” to save and “Ctrl + x” to exit.

Lastly, type “y” to send the email. You should see a “Mail sent” message.


Optionally, you can also attach a file to your email with the “a” keyboard shortcut, or type “c” to add a CC field.

To quit mutt, type “q”.

In addition to the “GUI” you see above, mutt can also be used non-interactively in a Bash script. To send an email using the mutt command:

mutt -s "Subject" -a /path/to/file/attachment -- recipient-address < /path/to/email/message.txt

Did you notice how similar it is to the “mail” command?

Mutt works with a config file that you can use to pre-configure your mailbox’s details. You can make use of the muttrc builder to quickly generate a “.muttrc” file and save it to your Home folder.
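A minimal .muttrc might look like the sketch below — the name and address are placeholders you would replace with your own:

```
# ~/.muttrc (illustrative values)
set realname = "Your Name"
set from = "you@example.com"
set use_from = yes
```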

Sending email from the terminal is not a difficult task, and in some situations it is a necessity. The good thing is that Linux comes with useful tools that you can use to send email from the terminal. We have covered mail and mutt, but they are not the only programs available; there are still several other applications that you can use to send email from the terminal. Let us know which one is your preferred choice.

Image Credit: Tim Morgan

Easily Setup and Manage a Web Server with Zentyal (Tue, 29 Oct 2013)

Unlike a regular desktop distro, Zentyal is designed as a one-stop web server for small office and enterprise users. The distro has all the components you need to set up a server.

The post Easily Setup and Manage a Web Server with Zentyal appeared first on Make Tech Easier.

Unlike a regular desktop distro, Zentyal is designed as a one-stop server for small office and enterprise users.

The distro has all the components you need to run a gateway server, an office server and a communication server. It’s got the Apache web server, OpenLDAP directory server, BIND DNS server, jabberd2 IM server, Zarafa groupware, Asterisk VoIP and DansGuardian for content-control management, and a lot more.

The USP of the distro is its host of custom management tools to monitor and control the various components of the server. Although configuring network management services isn’t for the faint of heart, Zentyal goes the extra mile to help you configure the different servers and services without mucking about with configuration files.


Zentyal’s installer, like the distribution itself, is based on Ubuntu Server. The installation process is fairly straightforward and only prompts you for basic requirements like your location, keyboard layout, etc.

However, pay attention to the network-related settings that will help Zentyal hook up with your network. When prompted to select a network card, select the one that’s connected to the Internet and not your internal network. Also carefully enter the login credentials of the administrator user.

That’s about all the information Zentyal needs for installation. If you are wondering why there isn’t a partitioning step, it’s because Zentyal is designed to take over the entire disk.

Alternatively, you can also install the Zentyal components on top of an Ubuntu Server installation by adding and pulling packages from Zentyal’s repository.

When the distro boots for the first time, it will install some core packages by downloading them from the Internet, so make sure you are online when you boot the distro.

When it’s ready, you’re dropped into Zentyal’s sparse desktop, which has a browser window open that points to Zentyal’s web-based administration console. Log into the administration section with the credentials of the user you created during installation. On the initial launch, Zentyal will start a configuration wizard to help you set up the server.


From here you can install individual server packages, or modules in Zentyal’s parlance, or select predefined groups such as Gateway, Infrastructure, Office, and Communications. You can skip this step and install the packages later.

You can administer and monitor the Zentyal installation from its Dashboard. You can watch various server metrics, such as the CPU load and bandwidth usage, and check the status of all the installed modules. From here you can also install any available updates to the underlying core Zentyal distribution. The navigation bar on the left of the Dashboard lists the various installed modules as you add them.


When you select a module or group of modules for installation, Zentyal will show you a list of additional dependency modules that need to be installed. Once the installation process has completed, Zentyal will configure the new modules, prompting you for the essential information required.

For example, if you install the module for running an Active Directory server, Zentyal will ask you if you want to run the server in standalone mode or get directory information by connecting to an external directory server.

Another impressive feature is that before making any changes to the system, Zentyal gives you a full summary of changes. It will tell you what actions it’s going to perform (“generate SSL Certificates”) and the reasons for doing so (“Zentyal will self-sign SSL certificates for FTP service”) along with the complete path to the system files it’s going to modify.


If you get stuck, Zentyal has extensive DIY support options. It has a dedicated documentation website and also hosts a forum where users share tips and tricks based on the setup of their own networks.

Since Zentyal is based on Ubuntu Server, it will run on any hardware certified by the upstream distribution. However, unlike Ubuntu Server LTS releases, which are produced once every two years, Zentyal puts out a stable release every September. All Zentyal releases are based on the latest Long Term Support release available at the time.

Besides the freely available Community Edition we’ve looked at in this review, Zentyal also offers two commercial editions – one for small businesses and the other for large enterprises. The major difference between the Community Edition and the commercial editions is that the free edition lacks the Disaster Recovery backup service and technical support.

Zentyal is a good business distro that’s easy to deploy, set up and manage. If you’re setting up a server for your network, I’d highly recommend checking out its community edition. The distro is light enough to run inside a virtual machine, and once you are satisfied, you can expose it to your network.

Image Credit: Richard Bowen

The post Easily Setup and Manage a Web Server with Zentyal appeared first on Make Tech Easier.

Managing Hard Disk Partitions Using fdisk [Linux] Mon, 28 Oct 2013 14:50:25 +0000 Most hard disks come with several partitions. If you need to manage hard disk partitions in Linux, fdisk is the tool you need.

The post Managing Hard Disk Partitions Using fdisk [Linux] appeared first on Make Tech Easier.

Even the simplest single-hard-drive installation of Linux, where the whole disk is used for the OS, probably has multiple partitions on the disk. If you need to work with the partitions on a disk, Linux provides several different tools, including fdisk.

fdisk is a menu-based, interactive command line tool that allows you to view, create, modify and delete partitions on a disk. In Linux, all devices are named according to special files found in the /dev directory. A typical SATA hard disk is named /dev/sda. To see a list of the hard disks on your system, use the “lshw” command:

sudo lshw -class disk

The output shows the hard drives and optical drives attached to the system:

Using lshw to list hard drives

To non-interactively list the partition table on the first hard drive, use:

sudo fdisk -l /dev/sda

The output will look something like this:


This shows that the first partition /dev/sda1 is the biggest partition and is a Linux partition. Since it is the only Linux partition, we also know that it is the root partition (or the system partition). sda2 is an extended partition (which can be subdivided into multiple logical partitions) and sda5 is the first (and only) logical partition in the extended partition. sda5 is used as swap space.

The second disk (/dev/sdb) on this test system is empty. To create a new partition run fdisk in its interactive mode:

sudo fdisk /dev/sdb

At the command prompt, type m to see the help menu or p to see the current partition list. To create a new primary partition, use the n command.

Enter p to create a primary partition then choose a partition number, in this case 1. Accept the default starting sector and then enter the size of the partition. On the test system, sdb is 100GB so I will create a 50GB partition by entering +50GB. Finally list the partitions using the p command. To save the partition table to the disk and exit, type w.

Create a new partition with fdisk

To delete a partition, use the d command. If the disk has multiple partitions, fdisk will ask which partition to delete, however if there is only one partition then fdisk will automatically delete it.

If you make a mistake at any point, use the q command to quit without saving. This will leave the hard disk in the same state as when you started fdisk.

Each partition needs to have a partition type. The partition type for Windows is different to the partition type for Linux and so on. There are also partition types for swap space and for older versions of Windows (before XP) using FAT rather than NTFS. Other Unix-like operating systems such as FreeBSD, OpenBSD or Mac OS X all have their own partition ids.

To see a list of partition types, use the l command. All the numbers listed are in hexadecimal, for example FreeBSD uses a5. Linux uses id 83 and Windows (from XP onwards) uses 7. If the partition is for use within your Linux installation, leave the partition type as the default 83, but if you want a partition that can be read by multiple operating systems including Windows then you should use either 7 or b.

To change the id on a partition, use the t command. You will be prompted for the partition number and then the partition code. If you have forgotten the code you wish to use, you can type L, instead of entering a partition type, to see the list again. Once you have entered the partition code, use p to list the partitions and check that the partition type has been set as expected.

Once a new partition has been created, it needs to be formatted. For partition types other than 83, it is best to format the partition using the relevant native operating system (i.e. Windows for id 7 etc). For Linux use the mkfs.ext3 or mkfs.ext4 commands for a typical partition:

sudo mkfs.ext4 /dev/sdb1

The filesystem then needs to be mounted using a command similar to this:

sudo mount /dev/sdb1 /home/gary/mediastore/

Where /home/gary/mediastore/ is the directory where you want the disk mounted. Finally, the /etc/fstab file needs editing so that the partition is mounted automatically at boot; for more information, read Getting to know your fstab.
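For reference, the /etc/fstab entry for this example might look like the line below. This is a sketch: the device and mount point match the example above, but the options are assumptions you should adapt to your own setup.

```
# <device>   <mount point>            <type> <options> <dump> <pass>
/dev/sdb1    /home/gary/mediastore    ext4   defaults  0      2
```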

fdisk is a versatile tool; however, make sure you back up your data before manipulating the partition table, as mistakes can be costly. It is also worth noting that fdisk has some limitations: it does not understand GUID Partition Tables (GPT), and it is not designed for large partitions. In those cases, use the parted tool.

The post Managing Hard Disk Partitions Using fdisk [Linux] appeared first on Make Tech Easier.

Emmabuntüs – A Distro Tailor-made For Refurbished Computers Fri, 25 Oct 2013 17:25:38 +0000 If you look past the minor peculiarities of Emmabuntüs, you’ll discover a well put together distro that’ll even appeal to users beyond its intended user base.

The post Emmabuntüs – A Distro Tailor-made For Refurbished Computers appeared first on Make Tech Easier.

Just as its idiosyncratic name suggests, Emmabuntüs has its minor peculiarities, but if you look past them you’ll discover a well-put-together distro that’ll appeal even to users beyond its intended user base.

In fact, it’s unfair to look at the distro in isolation. It’s part of a much broader ecosystem and it should be evaluated in that context. The distro was designed to ship with reconditioned computers assembled by humanitarian organizations from donated pieces of hardware. It owes its name to the French Emmaus charitable movement.

Because of this, one of the main goals of the distro is to run on minimal hardware. In fact, a computer with a 1.4 GHz processor and just 512 MB of RAM can run the distro.

While being lightweight, the distro also needs to be attractive enough to appeal to users and intuitive enough for them to use with ease. The distro developers also had to take into account the fact that a majority of the recipients of these refurbished machines would not have access to the Internet.

Emmabuntüs install

When you take all this into account, you’ll understand why the distro weighs in at well over 3GB and requires 15GB of hard disk space to install, why it includes every popular open source app there is, why it bundles proprietary codecs and apps, and why it puts a fancy dock on top of the lightweight Xfce desktop.

In fact, the everything-including-the-kitchen-sink approach to software on top of the Xfce desktop works wonderfully well. On our test machine, which had a 1.4GHz Celeron processor and 2GB of RAM, the distro performed smoothly without any hiccups.

However, because of the size of the Live image, the distro takes quite some time to boot. Running the Live distro on an older computer like our test machine is not a very pleasant experience.

In contrast to the 20-minute installs of modern distros, installing Emmabuntüs took me almost two hours. But the installed system was not only very responsive, it was chock-full of apps.

Emmabuntüs desktop

One of the first things you’ll notice about the distro is its easy-to-follow menus for installing proprietary add-ons. Despite obviously being written by someone whose primary language isn’t English, the menus do a nice job of educating users about non-free software.

You are greeted by three dialog boxes when you first log into the new installation. The first asks you to select one variant of the Cairo-Dock to display. It offers three choices: the default “Simple” version for new Linux users, a “Kids” version for children and a “Full” version for experienced Linux users.

Emmabuntüs prop

This is followed by the proprietary software dialog box, which first briefly explains software licensing and then shows a list of proprietary apps and codecs. The best thing is that almost all of the apps and codecs are bundled along with the distro and don’t need an Internet connection for installation.

Once the non-free components have been installed, the final dialog box lets you remove the language packs that you don’t require. By default, the distro installs support for the French, English, Spanish, Italian, German, Portuguese and Arabic languages.

Emmabuntüs dock

The distro is built on top of the rock solid Xubuntu 12.04 LTS release. The only core component of the desktop the distro has added is the Cairo-Dock. In the apps department, the distro has over 60 apps including some popular proprietary ones such as Skype, Dropbox, TeamViewer, and others.

The distro’s two web browsers, Firefox and Chromium, ship with lots of plugins to block ads and prevent phishing attacks. This helps make the distro suitable for young users, who’ll also enjoy the plethora of educational apps and games, including the OOo4Kids office suite for children.

Emmabuntüs kids

There’s also LibreOffice for the grown-ups, along with several popular multimedia tools, including the VLC media player, GNOME MPlayer, the OpenShot video editor, the RecordMyDesktop screencasting app, K3b for burning optical media, and image viewers and editors such as GIMP, Fotoxx, Picasa, and the Hugin photo stitcher.

Power users will appreciate the tons of utilities packed in the distro such as Boot Repair, BootUp-Manager, OSUninstaller, Ubuntu Tweak, UnetBootIn, VirtualBox, and an ndiswrapper front-end for installing Windows wireless drivers. There’s also Wine and PlayOnLinux for installing Windows software and games.

The desktop is clean and only includes a link to a folder that contains a variety of media, most of which is in French. For example, there’s the French-version of IBM’s “The Future is Open” advertisement, the Linux Foundation’s video celebrating 20 years of Linux in English, along with French and English versions of Jules Verne’s “20,000 Leagues Under The Sea” ebook, and some CC-licensed music by French musicians.

All said and done, Emmabuntüs will dazzle you but only if you don’t compare it with the popular regular desktop distros like Ubuntu and Fedora. The distro isn’t just a random assortment of Linux and proprietary apps, but rather a purposeful assembly of the right components to cater to the largest number of people.

Image credit: Nic McPhee

The post Emmabuntüs – A Distro Tailor-made For Refurbished Computers appeared first on Make Tech Easier.

Don’t Need the Keyboard Applet in Ubuntu 13.10? Here’s How to Disable It Thu, 24 Oct 2013 17:25:52 +0000 A new keyboard applet is enabled by default in Ubuntu 13.10. If you have no use for it, here's how to disable it.

The post Don’t Need the Keyboard Applet in Ubuntu 13.10? Here’s How to Disable It appeared first on Make Tech Easier.

Don't Need the Keyboard Applet in Ubuntu 13.10? Here's How to Disable ItUbuntu 13.10 has a new keyboard applet that makes switching between keyboard layouts and types easy. While it’s great to have if you like to use different keyboards and/or languages on your computer, many users may not have a need for it.

This new applet is enabled by default, but luckily you can easily disable it and remove it from your panel using these three easy steps.

1. Open System Settings.

2. Click on “Region and Language”

3. Uncheck the box next to “Show Current Input Source in Menu Bar.”

It should now be hidden. Any time you feel like you do need it, just follow the first two steps above and check the box next to “Show Current Input Source in Menu Bar.”

via OMG! Ubuntu!

Image Credit: ?olo

The post Don’t Need the Keyboard Applet in Ubuntu 13.10? Here’s How to Disable It appeared first on Make Tech Easier.

How to Encrypt Files on Linux Using GPG, Ccrypt, Bcrypt and 7-Zip Thu, 24 Oct 2013 14:50:48 +0000 Linux has several different command line tools that can encrypt and decrypt files using a password supplied by the user. Let's take a detailed look at some of them.

The post How to Encrypt Files on Linux Using GPG, Ccrypt, Bcrypt and 7-Zip appeared first on Make Tech Easier.

Linux has several different command line tools that can encrypt and decrypt files using a password supplied by the user. Such encryption tools have a myriad of uses, including the ability to encrypt files for sending securely over the Internet without worrying about third parties accessing the files if the transmission is somehow intercepted.

Before looking at the individual tools you need to make sure that all the relevant packages are installed. For Ubuntu, you would use the following command to install the programs:

sudo apt-get install gnupg bcrypt ccrypt p7zip-full

GNU Privacy Guard (GPG) is a tool primarily designed for encrypting and signing data using public key cryptography. It does however also contain the ability to encrypt data using just a user supplied password and it supports a variety of cryptographic algorithms.

To encrypt a file, in this case “big.txt”, using gpg, enter the following command:

gpg -c big.txt

You will be prompted to enter a password (twice). A new file is created during the encryption process called “big.txt.gpg“. The original file will also remain, so you will need to delete it if you only intend to keep an encrypted copy. If you compare the file sizes of the original file and the encrypted file, you will see that the encrypted file is smaller. This is because gpg compresses the file during encryption. If the file is already compressed (e.g. a .zip file or a .tgz file) then the encrypted file might actually end up being slightly larger.
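If you want to use gpg -c from a script, recent versions of GnuPG (2.1 and later) can take the passphrase non-interactively. The sketch below assumes such a gpg version, and the hard-coded passphrase is for demonstration only — never store a real passphrase in a script like this:

```shell
# Create a compressible sample file.
seq 1 10000 > big.txt

# Encrypt without a prompt (the loopback options are needed on gpg 2.1+).
gpg --batch --yes --pinentry-mode loopback --passphrase "demo-pass" -c big.txt

# Compare sizes: the text file compresses, so big.txt.gpg ends up smaller.
ls -l big.txt big.txt.gpg

# Decrypt to a new file to verify the round trip.
gpg --batch --yes --pinentry-mode loopback --passphrase "demo-pass" \
    -o big.out -d big.txt.gpg
```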

To decrypt the file use:

gpg big.txt.gpg

By default, files encrypted with gpg will use the “cast5” encryption algorithm which is approved by the Canadian government’s national cryptologic agency. However the gpg utility also supports a number of different built-in encryption algorithms including Triple DES (3DES), which is used by the electronic payment industry; Advanced Encryption Standard (AES), an encryption technique approved by the U.S. National Institute of Standards and Technology (NIST); and Camellia, a cipher jointly developed by Mitsubishi and NTT which is approved by the EU and Japan.

To see a list of the algorithms available type:

gpg --version


The list of available algorithms is shown in the “Supported algorithms” section of the output under the “Cipher” tag. To use a different algorithm, add the “--cipher-algo” parameter followed by the algorithm you want to use, e.g. “--cipher-algo=3DES”.

The full command then becomes:

gpg -c --cipher-algo=3DES big.txt

gpg isn’t the only encryption tool available on Linux. The original Unix systems included a command called “crypt“, however the level of security it provided was very low. In its honor, there are some other commands which can replace it, including “bcrypt” and “ccrypt“.

bcrypt uses the Blowfish algorithm while ccrypt is based on the Rijndael cipher, which is the algorithm used for AES. Many cryptanalysts no longer recommend the Blowfish algorithm, as published theoretical attacks weaken it; however, for casual encryption that won’t be subject to state-level (NSA, MI5, FSB) snooping, it is still useful.

To encrypt with bcrypt use:

bcrypt big.txt

Unlike gpg, the bcrypt command will replace the original file with the encrypted file and add .bfe to the end of the file name. Like gpg, the resulting file is also compressed and so the file size should be significantly smaller for uncompressed files. Compression can be disabled by using the “-c” parameter.

To decrypt the file use:

bcrypt big.txt.bfe

The .bfe file will be replaced by the original unencrypted file.

There are three possible ways to call the ccrypt command:

  • by using ccrypt directly with the -e option to encrypt
  • by using ccrypt directly with the -d option to decrypt
  • by using the ccencrypt and ccdecrypt commands

To encrypt a file enter:

ccencrypt big.txt

The original file will be replaced by big.txt.cpt. Unlike gpg and bcrypt, the output isn’t compressed. If compression is needed then tools like gzip can be used. Suggested file extensions for compressed and encrypted files are .gz.cpt or .gzc.

To decrypt a file use:

ccdecrypt big.txt.cpt

The 7-Zip compression tool also incorporates AES encryption. To create an encrypted archive use the “-p” parameter with the 7z command:

7z a -p big.txt.7z big.txt

You will be prompted to enter a password (twice). The file will then be compressed and encrypted. The original file will remain, so as with the gpg command, you will need to delete it if you only want to keep an encrypted copy. The advantage of using 7-Zip is that multiple files and folders can be archived and encrypted at once.

By using these encryption tools, sensitive data can be encrypted with sufficient strength that even government-sponsored agencies won’t be able to access your files. As with all passwords (for use online or offline), longer phrases provide better security than shorter ones.

The post How to Encrypt Files on Linux Using GPG, Ccrypt, Bcrypt and 7-Zip appeared first on Make Tech Easier.

Create Screencast Videos With Ease Using Kazam Tue, 22 Oct 2013 14:50:06 +0000 Screencast videos are an ideal tool to provide instructional assistance. Kazam for Linux is simple enough for new users, and offers tweakable controls to experienced users.

The post Create Screencast Videos With Ease Using Kazam appeared first on Make Tech Easier.

Screencasts are video recordings of your computer screen. They are an ideal tool for any computer-related instructional course. In addition to creating instructional videos, screencast videos also help you seek remote assistance by recording your actions as you retrace steps and reproduce a problem.

Not surprisingly, there isn’t any shortage of screencasting tools in Linux. What sets Kazam apart from the rest is its omnipresence across repositories of popular Linux distros, which makes it a breeze to install. Furthermore, the tool has a simple, unintimidating and intuitive user interface that helps new users get started, and offers just the right number of tweakable controls to experienced users.


To get started, install Kazam from your distro’s package manager. In Ubuntu, you can either install it from the Ubuntu Software Center or with the command:

sudo apt-get install kazam

When you launch the app, you’ll notice that in addition to recording screencasts, you can also use the app to take screenshots, by switching to the appropriate tab at the top.

The app has four modes of operation. In the “Fullscreen” mode, Kazam records the complete desktop. This mode is useful for creating tutorials where you need to show the interaction between the desktop manager and the different windows and apps.

There’s also the “Window” mode which records only a particular window and is ideal for recording screencasts of an app. As you get more proficient in making screencasts, you’ll find the “Area” mode to be the most useful. This mode is ideal for including multiple apps in your screencast without recording the full desktop.

Finally, there’s the “All Screens” mode that is meant for recording screencasts across a multi-monitor setup. Unless you have such a setup, this mode will be greyed out.


When you select a mode, Kazam might ask you for additional information related to that particular mode. For example, if you select the “Window” mode, Kazam will prompt you to select the window you want to record.

All modes also share some additional options. You have checkboxes that let you select whether you wish to capture the mouse or not, which is useful for following the cursor in a tutorial.

You can also choose whether you want the app to capture audio while recording the screencast. With Kazam, you can capture audio from a microphone as well as from the speakers, and the software gives you separate options for both.

Once you have made your selection and configured these settings, click the “Capture” button. This will start a countdown to enable you to prep before Kazam starts recording the screencast.

While the screencast is being recorded, the app minimises to the system tray. When you click on its indicator icon in the system tray, you’ll get a popup menu to either “Pause recording” or “Finish recording”.

When you finish recording the screencast, Kazam will process the video (which might take some time depending on the length of the recording and the processing power of your computer) and present a popup window with a couple of options.


The default option is to save the video. The other option will let you edit the screencast in a video editor. The app can pass the recorded video to one of the four supported video editors: Avidemux, Kdenlive, OpenShot and PiTiVi. Make sure the app you wish to use is installed on your computer.

Kazam ships with default settings that should be ideal for most users. Once you’ve created your first screencast and experienced its ease of use, you can tweak it to produce better screencasts.


To configure Kazam, head to “File -> Preferences” and switch to the “Screencast” tab. One of the two most important settings on this page is the framerate. The default value here is 15 frames per second, which keeps the size of the screencast video reasonable but doesn’t always produce a very smooth video.

You need to spend some time choosing the framerate for your screencast, taking several things into account. For example, how much area will the screencast cover? How long will it be? What type of application are you screencasting? Does it really need super smooth mouse trails?

In my experience, 30 frames per second often gets the job done.

Another important setting you should look at is the video format for the screencast. If you want to edit the screencast in a video editor, you should go with the RAW file format and then encode it in whichever format you prefer.

If, however, you wish to publish the screencast as is, you can save it in either the default and most commonly used MP4 format or the web-friendly WebM format.

After you’ve created a screencast it’s time to share it with the world. Video sharing websites like YouTube and Vimeo are filled with screencasts. You can also host the screencast on your own website. The easiest way to do this is to encode the screencast in the WebM format and then embed it using HTML 5’s <video> element.


I also discover a lot of screencasts via social media platforms such as Twitter, Facebook and Google+. You can use these channels to spread the word about your screencast. If you’ve done a screencast about an application, you can also contact its developers who can link to your screencast or even feature it on their website.

Image credit: Screencast Beta Testers

The post Create Screencast Videos With Ease Using Kazam appeared first on Make Tech Easier.

In Depth Look at Linux’s Archiving and Compression Commands Mon, 21 Oct 2013 14:50:10 +0000 There is more to archives than just the humble .zip. Check out the various Linux archiving and compression commands and how to make proper use of them.

The post In Depth Look at Linux’s Archiving and Compression Commands appeared first on Make Tech Easier.

The need to pack and compress files together into a single archive has been around since computers first got hard drives, and that need remains today. Everything from documents and photos to software and device drivers is uploaded and downloaded as archives every day. Most computer users are familiar with .zip files, but there is more to archives than just the humble .zip. In this tutorial, we will show you the various Linux archiving and compression commands and the proper way to make use of them.

Historically, the default archive tool on Linux has been the tar command. Originally it stood for “Tape Archive”, but that was back when tapes were the primary media for moving data around. The tar command is very flexible and can create, compress, update, extract and test archive files. The default extension for an uncompressed tar archive (sometimes called a tar file or tarball) is .tar, while compressed tar archives most often use the .tgz extension (meaning a tar file compressed with GNU zip). tar actually offers several different compression methods, including gzip, bzip2, xz (LZMA2) and compress (LZW).

To create an uncompressed tarball of all the files in a directory use the following command:

tar cvf somefiles.tar *

c means create, v stands for verbose (meaning the tar command will list the files it is archiving) and f tells tar that the next parameter is the name of the archive file, in this case “somefiles.tar”. The wildcard * means all the files in the directory, as it would for most Linux shell commands.

The tarball “somefiles.tar” is created in the local directory. This can now be compressed with a tool like gzip, zip, compress or bzip2. For example:

gzip somefiles.tar

gzip will compress the tarball and add the .gz extension. Now in the local directory there is a somefiles.tar.gz file instead of the somefiles.tar file.

This two step process, create the tarball and compressing it, can be reduced to one step using tar’s built in compression:

tar cvzf somefiles.tgz *

This will create a gzip compressed tarball called somefiles.tgz. The additional z option causes tar to compress the tarball. Instead of z you could use j, J or Z, which tell tar to use bzip2, xz and LZW compression respectively. xz implements LZMA2 compression, the same algorithm used by the popular Windows 7-Zip program.
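A quick self-contained experiment (the file names are arbitrary) shows the effect of the built-in compression:

```shell
# Build some highly compressible sample data.
mkdir -p demo
seq 1 5000 > demo/numbers.txt

tar cf plain.tar -C demo .          # uncompressed tarball
tar czf compressed.tgz -C demo .    # same contents with gzip compression

# The .tgz should be a fraction of the size of the plain .tar.
ls -l plain.tar compressed.tgz
```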

It is possible to create a 7-Zip compatible file using the 7zr command. To create a .7z archive use:

7zr a somefiles.7z *

The a option means add, i.e. add all the local files into the archive somefiles.7z. This file can then be sent to a Windows user and they will be able to extract the contents without any problems.

Using 7zr to perform backups on Linux isn’t recommended, as 7-Zip does not store the owner/group information about the files it archives. It is, however, possible to use 7zr to compress a tarball (which does store the owner information). You can do this using a Unix pipe as follows:

tar cvf - * | 7zr a -si somefiles.tar.7z

The hyphen after the f option tells tar to send its output to the Unix stdout and not to a file. Since we are using a pipe, the output from tar will be fed into 7zr which is waiting for input from stdin due to the -si option. 7zr will then create an archive called somefiles.tar.7z which will contain the tarball which in turn contains the files. If you are unfamiliar with using pipes, you could create a standard tarball and then compress it with 7zr using two steps:

tar -cvf somefiles.tar *
7zr a somefiles.tar.7z somefiles.tar

Extracting the files from these different archives is also very simple, here is a quick cheat sheet for extracting the different files created above.

To extract a simple tarball use:

tar xvf somefiles.tar

Where x means extract.

To extract a compressed tarball use:

tar xvzf somefiles.tgz

The z option tells tar that gzip was used to compress the original archive. Instead of z you could use j, J or Z depending on the compression algorithm used when the file was created.

To extract the files from a 7-Zip file use:

7zr e somefiles.tar.7z
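One more option worth adding to the cheat sheet: tar’s t option lists an archive’s contents without extracting anything, which is a good way to check what a tarball holds before unpacking it. A small self-contained example:

```shell
# Make a tiny archive to inspect.
mkdir -p d
echo "hello" > d/a.txt
tar czf t.tgz -C d .

# t = list contents, z = gzip, f = file name; nothing is extracted.
tar tzf t.tgz
```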

Linux offers a wide range of archiving and compression commands. Try experimenting with the zip and xz commands; they work in a very similar way to the other tools mentioned here. If you get stuck, try reading the man page, e.g. man xz, for extra help.

The post In Depth Look at Linux’s Archiving and Compression Commands appeared first on Make Tech Easier.

Manage Dropbox in Terminal With Dropbox Uploader Wed, 09 Oct 2013 14:50:14 +0000 If you are planning to back up files to Dropbox from the terminal, Dropbox Uploader is the best solution for you.

The post Manage Dropbox in Terminal With Dropbox Uploader appeared first on Make Tech Easier.

The good thing about Dropbox is that it has a desktop client for almost every OS and mobile device. However, if you need to access Dropbox from your server, or from a small device like the Raspberry Pi that doesn’t allow you to install the Dropbox client, a better solution is to manage your Dropbox account directly from the terminal (via the command line). This is where the Dropbox Uploader script comes in useful.

Dropbox Uploader is a Bash script which can be used to upload, download, list or delete files from Dropbox. The good thing about it is that it doesn’t require you to enter your Dropbox username and password. It makes use of the Dropbox API to connect to your Dropbox account, so you can transfer your files without worrying about leaking your password.

There are only two things required for Dropbox Uploader to work: Bash (obviously) and cURL. Bash is included in almost every Linux distro, unless you removed it manually. You will need to install cURL if it is not currently installed on your system. On Ubuntu-based distros:

sudo apt-get install curl

To install Dropbox Uploader, first grab the script from its Github site:

curl "" -o /tmp/

Then move it to the “bin” folder:

sudo install /tmp/ /usr/local/bin/dropbox_uploader

To get started, simply use the command:


For the first run, it will show you an App name and prompt you to create a Dropbox app with this app name.


Go to Dropbox Developer site and create an app.


Here are a few settings that you should set for your app:

  • Type of app: Dropbox API app
  • Type of data: Files and datastores
  • Type of files: All file types

For the app limitation, you can set it to access only files it creates, or all the files in Dropbox.

Back in the terminal, enter the App Key and Secret, then visit the Dropbox authorization link to grant Dropbox Uploader permission to access your Dropbox account. Once you’ve linked Dropbox Uploader to your Dropbox account, you will be able to manage your Dropbox from the terminal.

The usage is pretty simple. There are 10 commands that you can use with Dropbox Uploader:

  • upload
  • download
  • delete
  • move
  • copy
  • mkdir
  • list
  • share
  • info
  • unlink

To upload a file/folder, use the syntax:

dropbox_uploader upload /filepath/to/file-or-folder /filepath/in/dropbox

If the “filepath/in/dropbox” part is omitted, the file(s) will be uploaded to the top level of your Dropbox account.

To download a file/folder,

dropbox_uploader download /filepath/to/file-or-folder/in/Dropbox /filepath/in/local/machine

To list all the files in a folder in your Dropbox account,

dropbox_uploader list /filepath/to/folder/in/Dropbox

To grab the public link for a particular file in Dropbox,

dropbox_uploader share /filepath/to/file/in/Dropbox

Dropbox Uploader provides a good way for you to access and manage your Dropbox account directly from the terminal. It is particularly useful for web administrators to use on their servers. Coupled with a simple backup script and a cron job, you can easily automate server backups to Dropbox.
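As a sketch of that idea — the paths, the /backups Dropbox folder and the schedule below are all assumptions, and dropbox_uploader must already be linked to your account:

```shell
# Write a minimal nightly-backup script (hypothetical paths throughout).
cat > /tmp/dropbox-backup.sh <<'EOF'
#!/bin/bash
# Archive the web root, then push the tarball to a Dropbox folder.
tar czf /tmp/site-backup.tgz /var/www
dropbox_uploader upload /tmp/site-backup.tgz /backups/site-backup.tgz
EOF
chmod +x /tmp/dropbox-backup.sh

# Sanity-check the script without running it.
bash -n /tmp/dropbox-backup.sh && echo "script parses OK"

# A crontab entry to run it every night at 2am would look like:
#   0 2 * * * /tmp/dropbox-backup.sh
```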

The post Manage Dropbox in Terminal With Dropbox Uploader appeared first on Make Tech Easier.

]]> 0
Mastering the “Kill” Command in Linux Tue, 08 Oct 2013 21:25:01 +0000 When an app misbehaves, it is best to kill it before it crashes the system. Check out the various ways you can kill a process in Linux.

The post Mastering the “Kill” Command in Linux appeared first on Make Tech Easier.

It doesn’t matter which operating system you are using; sooner or later you will come across a misbehaving application that locks itself up and refuses to close. In Linux (and Mac), there is the “kill” command that you can use to terminate the application forcefully. In this tutorial, we will show you the various ways you can make use of the “kill” command to terminate an application.

When you execute a “kill” command, you are in fact sending a signal to the system to instruct it to terminate the misbehaving app. There are more than 60 signals that you can use, but in most cases all you really need to know are SIGTERM (15) and SIGKILL (9).

You can view all the signals with the command:

kill -l


  • SIGTERM – This signal requests that a process stop running. It can be ignored. The process is given time to shut down gracefully, meaning it has a chance to save its progress and release resources; in other words, it is not forced to stop.
  • SIGKILL – The SIGKILL signal forces the process to stop executing immediately. The program cannot ignore this signal, and any unsaved progress will be lost.
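The difference is easy to demonstrate with a throwaway process. In this sketch (plain shell, with “sleep” standing in for a hung app), the child traps SIGTERM so the polite request is ignored, while SIGKILL still gets through:

```shell
# Start a child that ignores SIGTERM, simulating an app stuck in a bad state
sh -c 'trap "" TERM; sleep 30' &
PID=$!
sleep 1                     # give the child time to install its trap

kill -15 "$PID"             # SIGTERM: politely ask it to quit (ignored here)
sleep 1
kill -0 "$PID" && echo "still alive after SIGTERM"

kill -9 "$PID"              # SIGKILL: cannot be trapped or ignored
wait "$PID" 2>/dev/null     # reap the child
kill -0 "$PID" 2>/dev/null || echo "gone after SIGKILL"
```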

The syntax for using “kill” is:

kill [signal or option] PID(s)

The default signal (when none is specified) is SIGTERM. When that doesn’t work, you can use the following to forcefully kill a process:



kill -9 PID

where the “-9” flag refers to the SIGKILL signal.

If you do not know the PID of the application, simply run the command:

ps ux

and it will display all the running applications together with their PIDs.


For example, to kill the Chrome app, I will run the command:

kill -9 3629

Also note that you can kill multiple processes at the same time:

kill -9 PID1 PID2 PID3

The “pkill” command allows the use of extended regular expression patterns and other matching criteria. Instead of using a PID, you can kill an application by entering its process name. For example, to kill the Firefox browser, just run the command:

pkill firefox

Since it matches by regular expression pattern, you can also enter a partial name of the process, such as:

pkill fire

To avoid killing the wrong process, run “pgrep -l [process name]” first to list the matching processes.
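You can try this safely with a disposable process; in this sketch, “sleep” stands in for the application you actually want to kill:

```shell
# Start a harmless stand-in process
sleep 300 &
SLEEP_PID=$!

# Preview the match first: pgrep uses the same pattern rules as pkill
pgrep -l slee                   # partial match, lists "sleep" with its PID

# -x requires an exact name match, which avoids killing the wrong process
pkill -x sleep
wait "$SLEEP_PID" 2>/dev/null   # reap the terminated process
echo "sleep terminated"
```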


Killall also uses the process name instead of the PID, and it kills all instances of a process with that name. For example, if you are running multiple instances of the Firefox browser, you can kill them all with the command:

killall firefox

In GNOME, you can restart Nautilus by using the command:

killall nautilus

Xkill is a graphical way to kill an application. When you type “xkill” in the terminal, your mouse cursor instantly becomes a cross. All you have to do is click the cross on the misbehaving app’s window and the application will be killed instantly. If you like, you can also add a keyboard shortcut to activate the xkill function.

When apps misbehave and cause the system to hang, it is very tempting to restart the computer and start the session all over again. With these “kill” commands, you will be able to manage misbehaving apps without letting them crash the system. This is especially useful on a server, where you don’t want a misbehaving process to bring the whole machine down.

Image credit: Kill Bill (Gates)

The post Mastering the “Kill” Command in Linux appeared first on Make Tech Easier.

]]> 2
Install Google Fonts in Ubuntu with TypeCatcher Mon, 07 Oct 2013 14:50:32 +0000 Instead of manually downloading and installing Google Fonts on your system, you can now make use of TypeCatcher to easily install Google Fonts in Ubuntu.

The post Install Google Fonts in Ubuntu with TypeCatcher appeared first on Make Tech Easier.

If you are looking for free custom fonts to use in your creative projects or website, Google Fonts comes with a huge library of fonts that you can use for free. Google Fonts also allows you to download fonts to your desktop so you can use them offline. To do so, you just add the fonts you want to your collection and then click the Download button. In Ubuntu, there is now an easier way to install Google Fonts on your desktop.

TypeCatcher is a simple desktop application that allows you to download Google Fonts to your desktop. What is good about TypeCatcher is that it automatically installs each font on your system so you can use it in your applications immediately. There is no need to install the font manually, which is required if you choose to download fonts from the Google Fonts site.

To install TypeCatcher, simply add the PPA, update the system and install it:

sudo add-apt-repository ppa:andrewsomething/typecatcher 
sudo apt-get update
sudo apt-get install typecatcher

TypeCatcher is very easy to use. The main screen is split into two panels. The left panel lists all the fonts in the Google Fonts library while the right panel shows the preview of the font. Select any font on the left and the preview will show up on the right.


Click the “down” arrow at the right side of the icon bar and you will be able to choose the text that shows up in the preview. You can, of course, select the “Custom text” option and type in your own text to see how it renders in the font. You can change the font size as well. There is no option to preview the various styles (italic, bold, etc.) of a font, though.


When you find a font you want, click the “Download” icon at the top and TypeCatcher will download the font and automatically install it for you. In case you are wondering, downloaded fonts are saved to the ~/.fonts/typecatcher folder.
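If you want to confirm from the terminal that a downloaded font is registered, fontconfig's command-line tools can help; the font name “Lobster” below is just an example:

```shell
# Where TypeCatcher stores its downloaded fonts
FONT_DIR="$HOME/.fonts/typecatcher"
ls "$FONT_DIR" 2>/dev/null              # the downloaded .ttf files live here

# Check whether the system sees the font ("Lobster" is an example name)
fc-list 2>/dev/null | grep -i lobster

# If an application doesn't pick up the new font, force a cache refresh
fc-cache -f "$FONT_DIR" 2>/dev/null
echo "checked $FONT_DIR"
```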


If you want, you can also click the “Trash” icon to uninstall the font from your system.

TypeCatcher provides an easy way for you to download free fonts from the Google Fonts library. It would be great if it could install multiple fonts at once and sync changes when a font is updated. For those who need to work with custom fonts in their projects, this will be a useful tool. One piece of advice, though: do not install all the fonts on your system, as doing so will slow it down.

The post Install Google Fonts in Ubuntu with TypeCatcher appeared first on Make Tech Easier.

]]> 0
What to Expect From Ubuntu 13.10 Final Beta Sat, 05 Oct 2013 14:50:40 +0000 It's that time of the year again when Canonical launches a new version of its popular Ubuntu distro. You may be wondering if the upcoming release, Saucy Salamander 13.10, is a worthy upgrade, or one that you can skip altogether. Let's check out Ubuntu 13.10 Final Beta and see what's new and whether it is worth upgrading.

The post What to Expect From Ubuntu 13.10 Final Beta appeared first on Make Tech Easier.

It’s that time of the year again when Canonical is about to launch a new version of its popular Ubuntu distro. For those using Ubuntu Raring 13.04 or the LTS Ubuntu Precise 12.04, you may be wondering if the upcoming release, Saucy Salamander 13.10, is a worthy upgrade, or one that you can skip altogether. With the release of the 13.10 Final Beta, let us check out what’s new and whether it is worth upgrading.

If you are keen to try out Ubuntu 13.10 Final Beta, you can download the ISO file here.

1. Mir and XMir

The biggest change in Ubuntu 13.10 is the implementation of Mir, Canonical’s new display server. In an effort to unify the desktop and mobile experience, Canonical created Mir to replace the old and outdated X window system. As a fallback, there is also XMir, an X11 implementation running on Mir, which allows the system to fall back to X if the graphics card does not support Mir.

At the moment, Mir will be the default only for Ubuntu Touch (the Ubuntu OS for phones). While it was planned to be the default display server for the Ubuntu 13.10 desktop as well, that plan was later scrapped, as there are still compatibility issues with XMir that won’t be fixed in time for the Oct 17 launch. We will most likely see it in Ubuntu 14.04.

2. Smart Scope

Smart Scope is a new feature that allows you to search (almost) anything from the Dash. If you are connected to Google and Facebook, it can also search your Facebook messages or Google documents. Beyond that, it is connected to services like Wikipedia, Yelp, Amazon and Foursquare, so you can search a multitude of services directly from the Dash.


3. Unity 7

The Unity desktop has been bumped up to version 7, with plenty of fine-tuning in Compiz that makes it run faster and smoother.

4. Keyboard Applet

The most obvious change on the desktop is the inclusion of a new keyboard applet, which allows you to easily switch between languages and keyboard layouts.


5. Applications

There haven’t been many changes to the applications. Firefox remains the default browser, together with Thunderbird, Rhythmbox and LibreOffice.

6. Visual Update

Some of the applications, such as Files (Nautilus) and System Settings, received small visual changes.


Most of the work in Ubuntu 13.10 went into the backend, to create a unified interface for both mobile and desktop. Few changes were made to the visuals and applications. Smart Scope may be a new inclusion in 13.10, but it has been around since 13.04, so it is not exactly a new feature. In my opinion, this is not a compelling, “must-upgrade” release. You won’t miss out on much by staying on Ubuntu 13.04.

Ubuntu 13.10 will be released on Oct 17, 2013.

The post What to Expect From Ubuntu 13.10 Final Beta appeared first on Make Tech Easier.

]]> 3
Reverse Engineer and Analyze Malware with REMnux Tue, 01 Oct 2013 14:50:26 +0000 If you are one of those who are curious about how malware works, there is a Linux distro that comes with all the necessary tools for you to analyze malware.

The post Reverse Engineer and Analyze Malware with REMnux appeared first on Make Tech Easier.

Getting infected by malware is easy. You just have to open a suspicious file, or visit a malicious website, and boom, your computer is infected. Analyzing and reverse engineering malware, on the other hand, is a much more difficult task that only experts with specialized tools can do. If you are curious about how malware works, there is a Linux distro that comes with all the necessary tools for you to analyze it.

REMnux is a lightweight Linux distribution that allows you to carry out malware analysis, or even reverse-engineer the malware to find out how it works.

REMnux is best used in an isolated environment, such as a virtual machine or a Live CD, so that the malware can’t hurt your main machine. It comes in the OVF/OVA format, which you can easily import into a virtual machine manager like VirtualBox or VMware. There is also an ISO image that you can burn to a CD and boot on your computer.

REMnux is based on Ubuntu and comes with the LXDE desktop, mainly because of its small memory footprint. On the first run, you might have no idea what REMnux is capable of doing and what tools are included. Checking the application menu is not helpful either, as most of the tools are command-line based and don’t show up in the menu. A good way to get started is to go through the “REMnux Tips” on the desktop. These will give you an overview of what REMnux can do and instructions for carrying out an analysis.


Analyze Network Malware

There are several network-related tools in REMnux that allow you to easily scan the network for malware activity. Wireshark is a network protocol analyzer, perfect for viewing your network activity at a microscopic level. Honeyd, stunnel and FakeDNS are useful for simulating network hosts and services, setting up the perfect testbed for malware analysis.


Analyze malicious websites

The Firefox browser in REMnux comes with many useful extensions pre-installed to help you analyze malicious websites. Firebug, JavaScript Deobfuscator, Tamper Data and User Agent Switcher are some of the extensions that make it easy for you to examine the inner workings of a malicious site.


Analyze malicious files

If you have a PDF file or a Microsoft Office document that you suspect is infected, you can scan it with tools like PDF Walker and pyOLEScanner. There are also PEScanner and SCTest for scanning executable files and shellcode.

The Volatility Memory Forensics Framework is also included in REMnux and can give you an insight into the runtime state of the system. It can spot hidden processes, list all processes, show registry keys, or even find and extract malware.

The good thing about REMnux is that it contains most of the tools you need to analyze PDF, Flash, JavaScript and other malware. You can, of course, install those tools on your current distro, but that would require a lot of time and configuration. With REMnux, you just boot it up and run them straight away. One thing, though: REMnux is not meant for everyone. Be prepared to get your hands dirty, as most of the tools are command-line based.

The post Reverse Engineer and Analyze Malware with REMnux appeared first on Make Tech Easier.

]]> 0
How to Install Viber in Linux (Ubuntu) Mon, 30 Sep 2013 14:50:58 +0000 You probably have used Viber on your mobile phone to make free phone calls. Viber has recently released a desktop version that allows you to make free phone calls from the desktop. Here is how you can install Viber in Linux.

The post How to Install Viber in Linux (Ubuntu) appeared first on Make Tech Easier.

If you own a smartphone, you have probably used Viber on your phone to make free calls. Like Skype, Viber is a VoIP (Voice over Internet Protocol) client that allows you to make free voice calls over the Internet. It used to be available only for mobile devices, such as Android, iOS, Windows Phone and BlackBerry, but desktop versions that let you install Viber and make phone calls from the desktop have recently been released. The bad news is that there are only Windows and Mac versions for now. In this tutorial, we will show you how to install Viber in Linux.

Note: While this tutorial is done in Ubuntu, the installation instructions should apply to most Linux distros.

1. On the Viber website, you can only find the Windows and Mac installers. The Linux version is still a work in progress, and you can download it here.

2. Unzip the archive and open the “Viber” folder. You should see an executable file named “Viber”. Click on it to launch Viber. No installation is required.


3. On the first run, Viber will ask if you have Viber on your mobile phone. If yes, that means you already have a Viber account. If not, it will run you through a wizard to set up an account.


4. To get started, simply enter your mobile number and it will set you up with your user account.


For verification, it will send an activation code to your mobile phone, which you will need to enter in Viber.


5. Once you are logged in, this is what you will see on the main screen: the navigation bar on the far left, your contact list in the center and the main panel on the right.


You can then select any contact to send an SMS or a picture, or to make a voice call or even a video call (beta). Like Skype, you can also add participants to your call session and turn it into a group chat.

6. On the navigation bar is a list of options, including Message archive, Contacts, History and a Dialer. At the bottom of the bar is the Settings.

7. The Settings allow you to configure whether Viber should auto-start on system startup, save messages when exiting a conversation, and so on. Viber also comes with an applet for your system tray, so you can receive notifications for incoming calls and messages. You can even configure which corner of the screen notifications appear in.


Unlike most social networks or IM clients that require you to create an account and manually add friends to your contact list, Viber makes use of your address book and automatically adds all your contacts to your friends list. If you are not comfortable with Viber accessing your address book (and probably uploading all your contact info to its servers), then this is not a tool you want to use. On the other hand, Viber does provide a great alternative to Skype, enabling you to make free calls over the Internet, with rather good call quality too. One thing it could (and should) do is allow users to create an account with a username rather than linking the account to a mobile number. One thing is for sure: most people will not use the same mobile number forever, and the last thing you want is plenty of inactive and useless accounts.

Try it out and let us know if this is your preferred choice of VOIP client.

The post How to Install Viber in Linux (Ubuntu) appeared first on Make Tech Easier.

]]> 6
How to Mount Amazon S3 in Ubuntu Thu, 26 Sep 2013 14:50:24 +0000 In Ubuntu, you can easily access Amazon S3 via various software, such as S3Fox or Dragon Disk. However, in a server situation, you won't have the luxury of using desktop software. In this tutorial, we will show you how you can mount Amazon S3 in Ubuntu, be it desktop or server.

The post How to Mount Amazon S3 in Ubuntu appeared first on Make Tech Easier.

Amazon S3 is a useful web service that allows you to store files cheaply. In Ubuntu (desktop), you can easily access Amazon S3 via various software, such as S3Fox or Dragon Disk. However, in a server situation, you won’t have the luxury of using desktop software. In this tutorial, we will show you how you can mount Amazon S3 in Ubuntu, be it desktop or server.

Note: This whole tutorial is done in the terminal.

1. To get started, first install the dependencies.

sudo apt-get install build-essential gcc make automake autoconf libtool pkg-config intltool libglib2.0-dev libfuse-dev libxml2-dev libevent-dev libssl-dev

2. Next, download riofs. This is a userspace filesystem for mounting Amazon S3. (s3fs is another FUSE module that you can use, but it is very buggy and I couldn’t get it to work properly.)


Alternatively, if you are using Git, you can check out its Github page for more details.

3. Extract the file:

tar xvzf

You should now find a “riofs-master” folder.

4. Enter the “riofs-master” folder and compile it. (Building from the Github master normally requires generating the build scripts first; if your download already includes a “configure” script, you can skip the autogen step.)

cd riofs-master
./autogen.sh
./configure
make
sudo make install

To mount Amazon S3 in Ubuntu, you have to make sure that you already have bucket(s) available for mounting. Also, have your S3 security credentials (Access Key ID and Secret Access Key) ready, as they are required for authentication.

1. Before we can mount our bucket, we have to set up the configuration file for riofs. In your terminal:

mkdir -p ~/.config/riofs
sudo cp /usr/local/etc/riofs.conf.xml ~/.config/riofs/riofs.conf.xml

This copies the default configuration file to your local config folder. You can change the destination folder if you want to.

Next, we need to add the security credential to the configuration file:

nano ~/.config/riofs/riofs.conf.xml

Scroll down the page till you see the AWS_ACCESS_KEY section.


Uncomment that section and replace “###AWS_ACCESS_KEY###” with your access key and “###AWS_SECRET_ACCESS_KEY###” with your secret key.

Save (Ctrl + o) and exit (Ctrl + x).

2. Change the permission for the riofs.conf.xml file.

chmod 600 ~/.config/riofs/riofs.conf.xml

3. Create a directory (preferably in your Home folder) to mount Amazon S3 to.

mkdir ~/S3

4. Lastly, mount your Amazon S3 bucket to the S3 directory.

riofs -c ~/.config/riofs/riofs.conf.xml my_bucket_name ~/S3

To check if your bucket is successfully mounted, just list all the files in the mounted directory:

ls ~/S3

There are a few additional options that you can set in riofs:


  • --cache-dir: set a cache directory to minimize downloads
  • -o "allow_other": allow other users to access your bucket. You will need to enable the “user_allow_other” option in the fuse config file (/etc/fuse.conf)
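Putting those options together, a full mount command might look like the sketch below; the bucket name and cache directory are placeholders, and “allow_other” only works once “user_allow_other” is enabled in /etc/fuse.conf:

```shell
# Create the mount point and a local cache directory (paths are examples)
mkdir -p "$HOME/S3" "$HOME/.cache/riofs"

# Mount the bucket with a local cache, readable by other users on the machine
riofs --cache-dir="$HOME/.cache/riofs" -o "allow_other" \
      -c "$HOME/.config/riofs/riofs.conf.xml" my_bucket_name "$HOME/S3" \
  || echo "mount failed (is riofs installed and configured?)"
```

When you are done, unmount it like any other FUSE filesystem with “fusermount -u ~/S3”.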

For those who use Amazon S3 to store files, it is very useful to have your buckets mounted on your computer (or server) so you can easily access them. You can even configure your backup application to save backup files to this folder, where they will automatically sync to the cloud.

The post How to Mount Amazon S3 in Ubuntu appeared first on Make Tech Easier.

]]> 1