RabbitMQ standalone and cluster installation

  • Install RabbitMQ in the VM. The installation steps are as follows.
  • Install the Erlang package:
  • Verify whether the Erlang package is already installed: rpm -q erlang-solutions-1.0-1.noarch
  • Download the Erlang package from the website: wget http://packages.erlang-solutions.com/erlang-solutions-1.0-1.noarch.rpm
  • sudo rpm -Uvh erlang-solutions-1.0-1.noarch.rpm
  • sudo yum update NOTE: use the command "yum --releasever=6.7 update" if you want a specific version.
  • sudo yum install -y erlang-18.2-1.el6
  • Import the RabbitMQ signing key: sudo rpm --import http://www.rabbitmq.com/rabbitmq-signing-key-public.asc
  • Install the RabbitMQ server with su -c 'yum list rabbitmq' and yum install rabbitmq-server, or install the RPM directly:
  • sudo rpm -Uvh http://www.rabbitmq.com/releases/rabbitmq-server/v3.6.0/rabbitmq-server-3.6.0-1.noarch.rpm
    RabbitMQ 3.6.* requires the socat dependency:
    sudo yum install epel-release
    sudo yum install socat
  • sudo chkconfig rabbitmq-server on
  • sudo /etc/init.d/rabbitmq-server start
  • Copy /usr/share/doc/rabbitmq-server/rabbitmq.config.example into the /etc/rabbitmq folder and rename it rabbitmq.config. Set the file's permissions to 666.
  • Uncomment the loopback line in the security section of rabbitmq.config: {loopback_users, []}
  • Avoid using RabbitMQ's default ports; configure ports of your own choice by editing rabbitmq.config. Uncomment and edit the following lines: {tcp_listeners, [<rabbitMQ port>]} and {listener, [{port, <rabbitMQ management port>}]}.
  • Enable the management console of RabbitMQ using the following command: rabbitmq-plugins enable rabbitmq_management
  • A firewall rule should be in place so the configured ports accept TCP connections. Use the following command: lokkit -p <rabbitMQ port>:tcp -p <rabbitMQ management port>:tcp
  • The default guest/guest account should be disabled. Create a new user and set its permissions using the following commands. Note: the password should be 16 characters, with no special characters, and should be generated by KeePass.
  • rabbitmqctl add_user <username> <password>
  • rabbitmqctl set_user_tags <username> administrator
  • Disable the guest user by changing its password once the created user is tested: rabbitmqctl change_password guest guest123
  • Restart the RabbitMQ server using the command: sudo service rabbitmq-server restart
  • Make the following changes in the RabbitMQ console: go to Admin > click on the user > click Set permissions. Check the permissions of the user; they should be the same as those of the guest user.
  • Try to create a new queue to check that everything is working fine.
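For reference, after the edits above the relevant portion of /etc/rabbitmq/rabbitmq.config might look like the sketch below. The port numbers are placeholders, not recommendations; the file uses Erlang term syntax, so the brackets and the trailing dot matter:

```erlang
[
  {rabbit, [
    {tcp_listeners, [5674]},        %% non-default AMQP port (placeholder)
    {loopback_users, []}            %% allow non-loopback logins
  ]},
  {rabbitmq_management, [
    {listener, [{port, 15674}]}     %% management console port (placeholder)
  ]}
].
```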


Create RabbitMQ High Availability Cluster:

1) Stop RabbitMQ on the master and slave nodes. Ensure the service is stopped properly.

/etc/init.d/rabbitmq-server stop

2) Copy the file below from the master to all nodes. This cookie file needs to be identical across all nodes. You can view its contents with:

$ sudo cat /var/lib/rabbitmq/.erlang.cookie

3) Make sure you start all nodes after copying the cookie file from the master.

Start RabbitMQ in master and all nodes.

$ /etc/init.d/rabbitmq-server start

4) Then run the following commands on all the nodes, except the master node:

$ rabbitmqctl stop_app
$ rabbitmqctl reset

5) Now, run the following commands on the master node:

$ rabbitmqctl stop_app
$ rabbitmqctl reset

6) Do not start the apps yet.

Open ports 4369 and 25672: lokkit -p 4369:tcp -p 25672:tcp

Stop the iptables on both master and slaves.

Start the app on the master so the slaves have a running node to join:

$ rabbitmqctl start_app

7) On each slave, join the cluster and start the app. Replace master with the hostname/IP address of the master node; you can add as many slave nodes as needed to the cluster:

$ rabbitmqctl join_cluster rabbit@master
$ rabbitmqctl start_app

8) Check the cluster status from any node in the cluster:

$ rabbitmqctl cluster_status

9) In the RabbitMQ management console, check whether you can log in with the previous user and that all the previous settings are in place.

If not, create users with the following command:

rabbitmqctl add_user <username> <password>

Give admin rights:

rabbitmqctl set_user_tags <username> administrator

rabbitmqctl add_vhost /

Give vhost rights by:

rabbitmqctl set_permissions -p / <username> ".*" ".*" ".*"

10) Create HA mirroring with:

rabbitmqctl set_policy ha-all "" '{"ha-mode":"all","ha-sync-mode":"automatic"}'

This will mirror all queues.

11) Now start iptables. You have now created a RabbitMQ HA cluster.

How Bitcoin Works

As interest in and coverage of Bitcoin and cryptocurrency continue to rise, an increasing number of investors remain oblivious to the under-the-hood mechanics of the technology. While a complete understanding is typically left to software engineers and experts, a simplified version can explain the intricate workings of Bitcoin without leaving much room for questions or concerns about the legitimacy of the rising cryptocurrency market.

That is what this article will concentrate on: explaining Bitcoin a bit further than the usual “it’s a ledger… kept by everyone!” notion.

Quick recap on Bitcoin’s origin and its price history.

Bitcoin was introduced back in 2008, when an individual or group of people under the pseudonym of Satoshi Nakamoto published a detailed white paper, a proof-of-concept explanation of a currency that had the potential to be both decentralized and self-governed. The paper is still officially hosted on http://bitcoin.org, and can be accessed and read by anyone with an understanding of cryptography and mathematics.

From the original white paper to 2013, the cryptocurrency gained traction and found itself at an all-time high of $1,165 per bitcoin on November 30, 2013. Soon after, due to controversy surrounding 850,000 coins (worth more than $450 million) disappearing from the biggest exchange at the time, Bitcoin plummeted to less than $200 per coin.

After the plummet in 2013, Bitcoin slowly began rising back to the $1,000 mark that it had crossed, and it successfully did so in January of 2017.

Following January, Bitcoin took another small dive due to news from China (home to one of Bitcoin’s main user bases) stating the country was closing its doors on the cryptocurrency. This was eventually found to be false, and Bitcoin quickly found its way back to its usual prices, where it currently stands.

While other news has affected the price of Bitcoin since the misinformed China-ban story, there has not yet been another incident as drastic for the price as the two mentioned.

If you would like some more in-depth reading of the above, I recommend checking out the following resources:

Bitcoin and how it works.

The magic behind Bitcoin is not hard to explain, but to understand the bigger picture you first need an understanding of three technologies that are crucial to the Bitcoin environment: peer-to-peer networking, hashing, and signature algorithms.

Peer-to-peer (P2P)

Popularized in 1999 through Napster, peer-to-peer is a method of sharing information through a decentralized network.

When you access any website on the internet, that website is being hosted and served by another computer somewhere in the world. This means that your computer sends a request through the World Wide Web to another machine and asks, ‘hey, can you send me your website?’ This query is quickly received and accepted, and a website like Google.com will send back the page requested. This is what is called the client-server model.

Client-server model.

The client-server model is the usual method by which your computer communicates with the rest of the internet, but the issue lies in the fact that the server, which processes queries, is centralized and bound by constraints such as bandwidth allocation: the server can only send the requested file at a pace limited by its upload rate, so instead of stalling the process for many, it accommodates everyone by decreasing the upload speed to each user. For the user, the file downloads at a slower pace.

In the realms of website hosting, the client-server model is not that big of an issue. A website is rather light in weight, and the client-server model is perfect at delivering sections of a website in quick pulses to requesting users. The main issue is when the file is larger in size, as that of a movie or a music file — in the case of Bitcoin, a ledger.

The whole notion of the client-server model was seen as faulty for such applications, especially for Napster. First, someone would have to store illegal movies or music on a centralized server, and second, too many people accessing the server would cause major slowdowns in download speeds. The following issues had to be fixed:

  • Centralization — How can we make sure we don’t store the files?
  • Download-speed — How can we make sure download speed is kept at full?

And thus P2P was introduced, allowing people to download a file from all others who owned the file:

Peer-to-peer network.

Peer-to-peer allowed anyone to download files at much faster rates, from different sources at the same time. This means that the bottom-left computer in the image above would be downloading pieces of a file from all the computers it is connected to. While downloading, it would also serve as an uploading hotspot for other computers lacking the pieces it had already downloaded. When the computer finished downloading the whole file, it became another node from which the full file could be downloaded.

Peer-to-peer created a decentralized system that allowed quick downloads without much concern for the end-user or the original uploader of the file. Once the file was uploaded and downloaded, it could quickly spread as it began downloading and uploading by all individuals who now had the file on their machine.
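The idea can be sketched in a few lines of Python. The peer names and chunks below are invented for illustration: a client gathers pieces of a file from whichever peers hold them, rather than from one central server.

```python
# Toy sketch of the peer-to-peer idea: a client assembles a file from
# chunks held by several peers instead of asking one central server.
file_chunks = [b"chunk0", b"chunk1", b"chunk2", b"chunk3"]

# Three hypothetical peers, each holding only part of the file.
peers = {
    "peer_a": {0: file_chunks[0], 2: file_chunks[2]},
    "peer_b": {1: file_chunks[1], 3: file_chunks[3]},
    "peer_c": {0: file_chunks[0], 3: file_chunks[3]},
}

def download(peers, total_chunks):
    """Fetch each chunk from whichever peer offers it first."""
    assembled = {}
    for held in peers.values():
        for index, data in held.items():
            assembled.setdefault(index, data)
    if len(assembled) < total_chunks:
        raise RuntimeError("some chunk is held by no peer")
    return b"".join(assembled[i] for i in range(total_chunks))

print(download(peers, 4) == b"".join(file_chunks))  # True
```

Once this client finishes, it would itself be added to `peers` as another full source of the file, which is exactly how availability grows in a real P2P swarm.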

This is the technology of peer-to-peer. This is one of the fundamental workings of Bitcoin, and we will proceed to the next important piece: hashing.


Hashing

The first designs of hash functions were introduced in the late 1970s, and in today’s world only a few accepted hash functions are known to be secure and viable. Hashing is the term used for putting any kind of information through a hash function and receiving an output that cannot be traced back to its original content.

In a normal mathematical function, a user places a value within the designated function and receives a solution in return.

Let us take y = x + 1. Make x = 10 and we have the y value of 11 (y = 10 + 1 = 11). A normal mathematical formula, though, can easily be reverse-engineered to find that y - 1 = x and thus 11 - 1 = x = 10, the original value of x.

The issue is that if given the value of y, a user can easily reverse the value back to x. The question forms: How can we send the value of y , without it being able to be reverse engineered? This is where hash functions come into play.

Hash functions are one-way functions. This means that when an input of x is made, the returning output is one that cannot be traced back — even if the user holds the formula which led to the x value eventually becoming y. The mathematics behind hash functions will greatly differ depending on the specific hash function. All hash functions, however, have a few things in common:

  • Same inputs will always output the same result. This means that if you input x = 10 , the value for y will never be different if the function is run multiple times with x = 10. It will always output the same value.
  • Outputs are always the same size. This means that no matter if the input is 5,000 characters or 1 character, the output will always be the same length — designated by the specific hash function.
  • The majority of hash functions (including the one used within Bitcoin) have what is called the ‘avalanche effect’, in which any slight change to an input (for example, adding a single ‘a’ to the end of ‘hello’) changes the output completely.

Hash functions are widely used in database and password storage. When you sign up for a website, proper websites will hash your password and store the hash result within their database. When you log in at a future date, your password gets hashed and the hashed password is compared with the hash stored in the database. This ensures that, in the event the database is hacked and leaked, your password is not found in plain text.

Here are some examples of SHA-256, a widely used cryptographic hash function that is also found in Bitcoin:

hello : 2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824
a : 
helloa : 

As you can see, there are no recognizable patterns in the hashes that would allow one to distinguish the word hello, the single letter a, or the combined helloa. This is because hashing is designed for exactly that: a lack of distinction. Also notice how SHA-256 always produces a 256-bit output (64 hexadecimal characters), no matter the input’s length.
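These properties are easy to reproduce with Python's standard library; the digest for "hello" matches the one quoted above:

```python
import hashlib

def sha256_hex(text: str) -> str:
    """Hex digest of SHA-256, the hash function used throughout Bitcoin."""
    return hashlib.sha256(text.encode()).hexdigest()

print(sha256_hex("hello"))
# 2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824

# Avalanche effect: one extra character gives a completely unrelated digest.
print(sha256_hex("helloa"))

# Fixed output size: a huge input still hashes to 64 hex characters (256 bits).
print(len(sha256_hex("x" * 100_000)))  # 64
```

The same one-liner is the core of the password-storage scheme described above: store `sha256_hex(password)` at sign-up and compare digests at login (real systems add a salt and a deliberately slow hash such as bcrypt or Argon2 on top of this idea).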

The last piece of this puzzle is the signature algorithm.

Signature Algorithm (Digital Signatures)

Introduced for the purpose of verifying digital documents, a signature algorithm is utilized to verify the sender of a message. In an offline world, a person’s signature is a representation of a person’s validity and agreement. If you sign a message in the offline world and mail it to someone, that signature is typically enough to prove the sender of the document.

The issue is that within the online world, a signature can easily be copied. Signing a document with the written words “~ Felipe Faria” is not very secure. Anyone with the ability to type “~ Felipe Faria” could claim to be someone they are not (unless their name is also Felipe Faria, which also creates conflicts), and so the question arises: “how do we create a viable signature system online?”

The answer is a signature algorithm like ECDSA. ECDSA stands for Elliptic Curve Digital Signature Algorithm, and through some advanced mathematics we are able to verify the integrity of a document through the use of a public and a private key.

The way in which everything works is rather simple. When a user generates an ECDSA pair (public and private key), the user is effectively generating a key that can sign a message and another key that can verify the signature.

The private key effectively ‘signs’ the document, and the public key is utilized to verify the document. As the name suggests, a private key is kept by the user (and never shared), and the public key is given to verify the sender.

Keep in mind that a public key is exclusive to one private key. This means that when pairs are generated, the private and public keys are mathematically linked together in a way that allows a private key to output the public key, but with just the public key it is impossible to reverse back to the private key. This means I can send someone a public key that is known to be linked to me, a message I would like to be read, and a signed message, to basically tell the user at the other end: “hey, I just sent you a document. It’s me, Felipe Faria.”

So let us generate some key pairs for ourselves:

Private Key: a055b7929836f686932ee2b952da2e6a45ebe9209995ecee8fe3e00dca4823c1
Public Key:

Now these two keys are used in conjunction with each other. We can utilize the algorithm to sign a message with our private key, for example Hello , and we receive the following signed message:

Signed Message:

The message Hello is signed with our private key, and an output is given as the signed message (which is really a signature specific to the message, not the message encrypted). Now we can send another user our public key, the original message, and our signed message, and that user can use our public key to verify that the message was signed by us.

You can actually go to the website above (leave all configurations as they are), place the public key, the message, and the signed message, and verify the signature. You should receive a “valid ECDSA signature” dialog. Modify any section of the original Hello , public key, or signature, and you will receive an invalid dialog.

The essence of a signature algorithm is to simply verify the sender of the message by creating a signature generated by the message and a private key, that can be verified by a public key that is linked to the signing private key.


  • Peer-to-peer allows the downloading and uploading of files without the hassle of a main centralized server. It allows a file to be kept and downloaded from multiple machines.
  • Hashing allows any input to become an irreversible output. This allows passwords, for example, to be stored in a safe manner. Hash a password and store it. When a user attempts to access your website, all one has to do is hash their password and send it to the server as proof that they are the user they claim to be.
  • Signature algorithm allows a file or string to be signed by a private key and verified by a linking public key, effortlessly proving that the private key signed the original message.


Now that we have peer-to-peer, hash, and digital signatures explained, a more fundamental understanding can be made about Bitcoin. For that, I’ve cut the explanation of Bitcoin into different sections:

  • Network and Roles: How all computers within the Bitcoin network communicate (users, miners, nodes), and their specific jobs.
  • Wallets: How a wallet works and how coins are kept safe.
  • Transactions: How a transaction is created and sent into the network.
  • Mid-Review: Quick review on how wallets, transactions, and nodes interact.
  • Mining: What mining is and how it keeps the network alive.

We will tie everything together in the next section, the conclusion.

Network and Roles

Within the Bitcoin network, there are three important (and distinctive) entities:

  • Users: Individuals who simply own and give value (through supply and demand) to Bitcoins. I will go into more depth on how Bitcoins are stored within the wallet section, and how they are spent within the transactions section.
  • Miners: Individuals who purposely mine Bitcoins by utilizing massive computer power to literally ‘guess’ a hash puzzle (explained further under mining). When a hash puzzle is solved, 12.5 BTC (yes, 12.5, currently at a value of a bit over $13,500) is rewarded, and transactions are considered verified along the way.
  • Nodes: Broadcasters of messages within the network. When a user creates a transaction, it is relayed to a node, which verifies the transaction first and then adds it to a list of unverified transactions that miners can pull from. Miners, who then mine the selected transactions, send these finalized (and confirmed) transactions back to the nodes. Nodes take this information and place all verified transactions on the public ledger. Nodes also communicate among themselves to verify they are all kept up to date on the information they store.

In the bigger picture, this is what’s occurring when a user sends a transaction to the network:

  1. User will specify another wallet to send Bitcoins to.
  2. The user’s transaction is broadcast to a node, which verifies the integrity of the transaction and then places it in a list of unverified transactions (called a block) that gets broadcast to miners.
  3. Miners receive a block, and through a hashing game they generate new Bitcoins and relay to nodes the completed puzzle.
  4. Nodes receive a message that a miner has solved the hashing puzzle; they verify the integrity of the puzzle and place all unverified transactions into the verified transaction list, known as the blockchain.


Wallets

For anyone to securely own a Bitcoin, they must have a wallet. Wallets are typically third-party (open source) software that keeps track of a user’s Bitcoins. The intricacy of a wallet is formed by the last piece of technology we discussed, the signature algorithm.

The infamous ledger.

When a Bitcoin wallet is created, an ECDSA pair is also generated. This ECDSA pair, as we discussed above, has a private and a public key:

Private Key: a055b7929836f686932ee2b952da2e6a45ebe9209995ecee8fe3e00dca4823c1
Public Key:

Utilizing our public key and some cryptographic math, the wallet automatically formulates our public key into a Bitcoin address:

Public Key:
Bitcoin Address:
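The "cryptographic math" behind this step can be sketched as follows. Real Bitcoin hashes the public key with SHA-256 and then RIPEMD-160 before Base58Check-encoding it; since RIPEMD-160 is not always available in Python's hashlib, this sketch substitutes a truncated SHA-256 so the shape of the process is visible. The resulting address is therefore illustrative, not a real Bitcoin address.

```python
import hashlib

B58_ALPHABET = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

def base58check(payload: bytes) -> str:
    """Base58Check as Bitcoin uses it: payload + 4-byte double-SHA-256
    checksum, encoded in the Base58 alphabet (leading zero bytes -> '1')."""
    checksum = hashlib.sha256(hashlib.sha256(payload).digest()).digest()[:4]
    data = payload + checksum
    n = int.from_bytes(data, "big")
    out = ""
    while n:
        n, rem = divmod(n, 58)
        out = B58_ALPHABET[rem] + out
    pad = len(data) - len(data.lstrip(b"\x00"))
    return "1" * pad + out

def toy_address(pubkey: bytes) -> str:
    # Real Bitcoin: RIPEMD-160(SHA-256(pubkey)); here we truncate SHA-256
    # to the same 20 bytes purely for illustration.
    h = hashlib.sha256(pubkey).digest()[:20]
    return base58check(b"\x00" + h)      # 0x00 = mainnet version byte

address = toy_address(b"\x04" + b"\x11" * 64)  # a made-up uncompressed pubkey
print(address)  # begins with '1', like real mainnet addresses
```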

This Bitcoin address is now our identity within the network. The wallet will now contain the following information:

Private Key: a055b7929836f686932ee2b952da2e6a45ebe9209995ecee8fe3e00dca4823c1
Public Key:
Bitcoin Address:

Wallets do not need to tell nodes that a new address has been created. Nodes simply assume that every possible Bitcoin address exists, and that as long as someone owns the private key to a Bitcoin address they are free to send transactions. This means that yes; you can send Bitcoins to a wallet that has never actually been generated.

The same wallet being generated twice is mathematically improbable. There can be a total of 2¹²⁸ wallets generated; that is the following number:


When a transaction is created through the wallet, the wallet will sign the transaction using its private key to tell anyone who decides to look at the transaction: “Yes, I am sending this amount of Bitcoins over to address x.”

Wallets also store specific values of incoming transactions. This means that when a Bitcoin is received, that Bitcoin is stored with information regarding its past owners. This is a major point of the Bitcoin network because it allows every single Bitcoin to be traced back to its origins (expanded on more under transactions).


Transactions

Now that we have a wallet with its corresponding private and public key, we can send and receive transactions. Transactions are based on two things:

  • Inputs: Reference to an output from a previous transaction.
  • Outputs: Contains instructions for sending bitcoins.

Transactions will look like so:

Each rectangular block is its own transaction. In each transaction, the following is held:

  • The transaction’s content (expanded below; inputs and outputs).
  • The signature of the creator of the transaction — the signature is based on the transaction’s content.
  • The public key of the creator of the transaction to verify the signature, created based on the content.

As the diagram indicates, every transaction must reference a preceding one. The transaction’s content is both the inputs and outputs of said transaction.

The input(s) is the past transaction that led the Bitcoin to the sender’s wallet, and the output(s) is the receiver of the Bitcoin, and possibly the sender in special circumstances. In the case of inputs, Bitcoins are stored at their final value, so let us say 1.0 BTC. If the owner of a wallet owns that 1.0 BTC and is attempting to send 0.2 BTC over to friend B, the outputs of said transaction would have to be the following:

  • 0.8 BTC to the original sender.
  • 0.2 BTC to friend B.

Effectively, this system creates a tracking mechanism that never allows random Bitcoins to spawn within the network. This means that all Bitcoins can be traced back to their origins (we will discuss how Bitcoins ‘spawn’ under mining).

Transactions are based on two principles: verification and content. The verification of a transaction is done by signing the content with a private key, and then placing the signature and the corresponding public key within the transaction. The content of a transaction is all its past transactions and the receiver of Bitcoins.
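A toy model makes the input/output split concrete. The field names below are invented for the sketch and are not Bitcoin's actual wire format; the point is that outputs must be covered by inputs, and that hashing the content gives a stable transaction id:

```python
import hashlib
import json

def txid(tx):
    """Toy transaction id: SHA-256 of the canonicalized content."""
    return hashlib.sha256(json.dumps(tx, sort_keys=True).encode()).hexdigest()

# Hypothetical transaction: a 1.0 BTC input is split into 0.2 BTC to
# friend B and 0.8 BTC back to the sender, as in the example above.
tx = {
    "inputs":  [{"txid": "previous-tx-id", "amount": 1.0}],
    "outputs": [{"to": "wallet_1", "amount": 0.8},
                {"to": "friend_b", "amount": 0.2}],
}

def balanced(tx):
    """A node's basic sanity check: outputs may not exceed inputs."""
    return (sum(o["amount"] for o in tx["outputs"])
            <= sum(i["amount"] for i in tx["inputs"]))

print(balanced(tx))        # True
print(len(txid(tx)))       # 64 hex characters, like any SHA-256 digest
```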

Mid Review

The contents of the last two sections can be somewhat confusing, so through this mid-review hopefully we can grasp the interaction of a wallet and a transaction if we haven’t yet.

A wallet generates an ECDSA key pair. The public key runs through some cryptographic steps to output what effectively looks like a Bitcoin address. For simplicity’s sake, let us say this wallet contains 1.0 BTC. We will name our wallet wallet_1, and wallet_1 is going to make a transaction of 0.2 BTC to wallet_2. To do so, wallet_1 will have to create a transaction that looks like the following:

  • Inputs: … (past transactions)
  • Outputs: 0.8 BTC to wallet_1, 0.2 BTC to wallet_2.
  • Signature: Signature of the content.
  • Public Key: The public key of the sender; wallet_1.

This transaction will be sent to a node, which will verify the inputs against the ledger and the validity of the signature/public-key/content, and which will place the transaction within a block (of unverified transactions) that gets sent for mining (this process is explained under mining). Once that transaction is mined and placed within the verified transaction list (the blockchain), wallet_1 is known within the network to have 0.8 BTC and wallet_2 is known to have 0.2 BTC. wallet_2 will now send wallet_3 0.1 BTC:

  • Inputs: 0.2 BTC from wallet_1.
  • Outputs: 0.1 BTC to wallet_2, 0.1 BTC to wallet_3.
  • Signature: Signature of the content.
  • Public Key: The public key of the sender; wallet_2.

And this process will continue on forever. These transactions, keep in mind, are stored within the public ledger called ‘the blockchain’. This means that every coin can be traced back to its origin, and when a transaction is sent to a node for verification a node can effectively say: “Hey, your inputs are not valid!” or “Hey, your signature does not match your public key! You don’t own these Bitcoins!” (with that level of enthusiasm, I like to believe).


Mining

One of the most crucial parts of Bitcoin is mining. Miners pull unverified transactions, based on their criteria, from nodes. By hashing this list of unverified transactions together with the last block that was mined, miners reduce everything to a small 256-bit string, or set of characters.

This 256-bit string is then placed together with random data, called a ‘nonce’… and this is where the hashing game begins. The miner’s job is now to discover a nonce that, when placed with the unverified transactions and the last block, will generate a hash with a specific number of leading zeros.

The number of leading zeros is what creates the ‘difficulty’ within the network; at the time of this article, this was the hash of the last block mined:


Keep in mind that due to the avalanche effect, a simple change to the nonce completely changes the resulting hash. For all those leading zeros to have been found, billions of hashes with different nonces had to be tried. When the proper nonce is found, however, the network allows the miner to create one transaction of up to 12.5 BTC within that block; and that’s exactly what a miner will do. The miner rewards itself by creating a 12.5 BTC transaction to, well, itself.
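The hashing game can be sketched as a toy proof-of-work loop. Real Bitcoin double-SHA-256-hashes a block header and compares it against a numeric target rather than counting hex zeros, but leading zeros capture the idea; the block data below is a made-up placeholder:

```python
import hashlib
from itertools import count

def mine(block_data: bytes, difficulty: int):
    """Try nonces until the hash has `difficulty` leading hex zeros."""
    target = "0" * difficulty
    for nonce in count():
        digest = hashlib.sha256(block_data + str(nonce).encode()).hexdigest()
        if digest.startswith(target):
            return nonce, digest

nonce, digest = mine(b"previous-block-hash+pending-transactions", 4)
print(digest)  # starts with "0000"; on average 16**4 = 65,536 nonces are tried
```

Raising `difficulty` by one multiplies the expected work by 16, which is why the real network's string of leading zeros represents an enormous amount of guessing.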

Nodes hold the unverified transactions that miners pull from, which means miners are on a timetable. If a miner successfully mines a block, that block is officially added to the blockchain (the ledger) and can no longer be mined. Other miners then have to discard their work and begin mining the next block.

When the Bitcoin network was first created, the reward for each block mined was 50 BTC. Hard-coded into Bitcoin, after 210,000 blocks had been mined (approximately 4 years) the network dropped the reward to 25, and after 420,000 blocks the network dropped the reward to 12.5. This will continue until the reward is effectively zero. This implementation was done so the Bitcoin network never surpasses 20999999.9769 BTC, or the current value of $22,760,849,974.96.
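That supply cap follows directly from the halving schedule. A quick calculation in whole satoshis (Bitcoin's smallest unit, 10⁻⁸ BTC, which is how the reward is actually halved) reproduces the figure quoted above:

```python
SATOSHI = 10**8                      # 1 BTC = 100,000,000 satoshis
reward, total, eras = 50 * SATOSHI, 0, 0
while reward > 0:
    total += 210_000 * reward        # each reward era lasts 210,000 blocks
    reward //= 2                     # the halving (integer division, as in Bitcoin)
    eras += 1

print(total)   # 2099999997690000 satoshis, i.e. 20999999.9769 BTC
print(eras)    # 33 reward eras before the reward rounds down to zero
```

The sum lands just under 21 million BTC because the final fractional satoshi of each halving is discarded by the integer division.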


Conclusion

So now that we have all the pieces of the puzzle, we can go ahead and put the puzzle together completely. Let’s quickly review all the aspects explained above:

  • Peer-to-peer allows the downloading and uploading of files without the hassle of a main centralized server. It allows a file to be kept and downloaded from multiple machines.
  • Hashing allows any input to become an irreversible output. This allows passwords, for example, to be stored in a safe manner. Hash a password and store it. When a user attempts to access your website, all one has to do is hash their password and send it to the server as proof that they are the user they claim to be.
  • Signature algorithm allows a file or string to be signed by a private key, and verified by a linking public key, effortlessly proving that the private key signed the original message.
  • Network and roles include the user, nodes, and miners. Users are everyday individuals simply sending and receiving Bitcoins. Nodes relay messages around the network, verify transactions, and keep the full ledger of the Bitcoin network. Miners effectively place unverified transactions (blocks) into the blockchain by completing a hash game.
  • Wallets are what a user will utilize to send and receive transactions. Wallets sign transactions before sending them to a node to be verified, which eventually is picked up by a miner to be mined, and eventually back to the nodes to be considered confirmed.
  • Transactions are sent by a user’s wallet to a node. They include the input(s) of the Bitcoin being sent (previous transactions that led to this Bitcoin), and the output (receiving address(es)).
  • Mining is when a miner successfully mines a block through the hash game and is rewarded with however much Bitcoin the network allows at the time.

Now that we have all the information above, I can tell you what Bitcoin is in one paragraph:

Bitcoin is a peer-to-peer network, effectively making it decentralized, that utilizes nodes to keep the full ledger of the network, verify transactions, and broadcast messages. Miners generate new Bitcoins through a hashing game, and users use and give value to Bitcoins. Through the use of a signature algorithm, a wallet can generate transactions that are verified by nodes, mined by miners, and stored in the ledger by nodes. Hashing provides a simple way to keep track of previous transactions and previous blocks, and to create new Bitcoins.

Linux ldconfig Command Examples

What is ldconfig?

ldconfig is used to create, update, and remove symbolic links for the current shared libraries, based on the library directories present in /etc/ld.so.conf.

3 ldconfig Examples

1. Display current libraries from the cache

This displays the list of directories and libraries stored in the current cache. In the following example, it indicates that there are 916 libraries in the cache file /etc/ld.so.cache, and lists them (truncated here with head -5).

# ldconfig -p | head -5
916 libs found in cache `/etc/ld.so.cache'
	libzephyr.so.4 (libc6) => /usr/lib/libzephyr.so.4
	libzbar.so.0 (libc6) => /usr/lib/libzbar.so.0
	libz.so.1 (libc6) => /lib/libz.so.1
	libz.so (libc6) => /usr/lib/libz.so

2. Display libraries from every directory

Scans all the directories and prints the directory names and all the links created under them.

# ldconfig -v | head
	libGL.so.1 -> libGL.so.1.2
	liblouis.so.2 -> liblouis.so.2.2.0
	libasound_module_ctl_oss.so -> libasound_module_ctl_oss.so
	libasound_module_ctl_bluetooth.so -> libasound_module_ctl_bluetooth.so
	libasound_module_pcm_bluetooth.so -> libasound_module_pcm_bluetooth.so
	libasound_module_pcm_vdownmix.so -> libasound_module_pcm_vdownmix.so
	libasound_module_rate_speexrate.so -> libasound_module_rate_speexrate_medium.so

/etc/ld.so.conf has an include statement, which indicates that all the *.conf files under the /etc/ld.so.conf.d directory should be considered.

# cat /etc/ld.so.conf
include /etc/ld.so.conf.d/*.conf

As you see below, there are multiple *.conf files located under this ld.so.conf.d directory. All of these files will be used.

# ls -1 /etc/ld.so.conf.d/
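The include mechanism is simple enough to sketch. The helper below is a simplification of what ldconfig does internally (the real parser also handles keywords such as hwcap); it expands a config file into its flat list of library directories, and the demo builds a throwaway copy of the layout shown above:

```python
import glob
import os
import tempfile

def expand_ld_so_conf(path):
    """Expand a minimal ld.so.conf: plain directory lines, plus
    'include <glob>' lines pulling in further config files."""
    dirs, base = [], os.path.dirname(path)
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#"):
                continue
            if line.startswith("include"):
                pattern = line.split(None, 1)[1]
                if not os.path.isabs(pattern):
                    pattern = os.path.join(base, pattern)  # relative to the conf file
                for sub in sorted(glob.glob(pattern)):
                    dirs += expand_ld_so_conf(sub)
            else:
                dirs.append(line)
    return dirs

# Demo: a temp tree mimicking /etc/ld.so.conf + /etc/ld.so.conf.d/.
root = tempfile.mkdtemp()
os.mkdir(os.path.join(root, "ld.so.conf.d"))
with open(os.path.join(root, "ld.so.conf.d", "dummy.conf"), "w") as f:
    f.write("/opt/dummy/lib\n")
with open(os.path.join(root, "ld.so.conf"), "w") as f:
    f.write("include ld.so.conf.d/*.conf\n")

print(expand_ld_so_conf(os.path.join(root, "ld.so.conf")))  # ['/opt/dummy/lib']
```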

Sometimes when you do ldconfig -v, you might get the following errors. This happens because some of the *.conf files located under /etc/ld.so.conf.d refer to directories that don’t exist.

/sbin/ldconfig.real: Can't stat /lib/i486-linux-gnu: No such file or directory
/sbin/ldconfig.real: Can't stat /usr/lib/i486-linux-gnu: No such file or directory
/sbin/ldconfig.real: Can't stat /lib/i686-linux-gnu: No such file or directory
/sbin/ldconfig.real: Can't stat /lib64: No such file or directory

Note: You can either ignore these error messages or remove those *.conf files from the /etc/ld.so.conf.d directory.

3. Inform System about the New Libraries

If you’ve installed a new program by compiling it from source, you might want to inform the system about the new libraries.

For example, let us assume that you've installed a program called dummy, which has all its libraries under the /opt/dummy/lib directory.

The following example updates the links using only the directory /opt/dummy/lib; it does not process the /etc/ld.so.conf file and does not rebuild the cache. It only updates the links.

# ldconfig -n /opt/dummy/lib

Instead of the above, you can also add /opt/dummy/lib to /etc/ld.so.conf and do the following.

# vi /etc/ld.so.conf

# ldconfig
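
On distributions whose /etc/ld.so.conf contains the include statement shown earlier, a cleaner variant is a drop-in file rather than editing /etc/ld.so.conf itself. This is a sketch using the hypothetical dummy program from above; it must be run as root:

```shell
# Run as root. Register /opt/dummy/lib via a drop-in file instead of
# editing /etc/ld.so.conf itself; the include line picks it up automatically.
# "dummy" is the hypothetical program from the example above.
mkdir -p /opt/dummy/lib
echo "/opt/dummy/lib" > /etc/ld.so.conf.d/dummy.conf
ldconfig
```

Removing the drop-in file and re-running ldconfig undoes the change without touching any shared configuration.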

Syntax and Options


ldconfig [OPTION...]
-v, --verbose : Verbose mode. Prints the current version number, the name of each directory as it is scanned, and the links that are created.
-n : Process only the directories specified on the command line; skips the regular /usr/lib and /lib directories and those listed in /etc/ld.so.conf. Implies -N.
-N : Do not rebuild the cache. Unless -X is also specified, links are still updated.
-X : Do not update the links. Unless -N is also specified, the cache is still rebuilt.
-f CONF : Use CONF as the configuration file instead of /etc/ld.so.conf.
-C CACHE : Use CACHE as the cache file instead of /etc/ld.so.cache.
-r ROOT : Change to and use ROOT as the root directory.
-l : Library mode; manually link individual libraries.
-p, --print-cache : Print the lists of directories and candidate libraries stored in the current cache.
-c FORMAT, --format=FORMAT : Use FORMAT for the cache file. Valid values are old, new, and compat (the default).
-i, --ignore-aux-cache : Ignore the auxiliary cache file.
-?, --help, --usage : Display help.
-V, --version : Display the version number.

An Easy Way to Hide Files and Directories in Linux

If you occasionally share your Linux desktop machine with family members, friends, or colleagues at your workplace, then you have a reason to hide certain private files and directories. The question is: how can you do this?

In this tutorial, we will explain an easy and effective way to hide files and directories and view hidden files/directories in Linux from the terminal and GUI.

As we’ll see below, hiding files and directories in Linux is so simple.

How to Hide Files and Directories in Linux

To hide a file or directory from the terminal, simply append a dot . at the start of its name as follows using the mv command.

$ ls
$ mv sync.ffs_db .sync.ffs_db
$ ls

Hide File in Linux Terminal

Using GUI method, the same idea applies here, just rename the file by adding a . at the start of its name as shown below.

Hide File in Linux Using File Manager

Once you have renamed it, the file may still be shown; move out of the directory and open it again, and it will be hidden thereafter.

How to View Hidden Files and Directories in Linux

To view hidden files, run the ls command with the -a flag which enables viewing of all files in a directory or -al flag for long listing.

$ ls -a
$ ls -al

View Hidden Files in Linux Terminal
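
The whole effect can be reproduced in a small sandbox; the directory and file names below are arbitrary:

```shell
# Create a sandbox directory and a test file to hide.
mkdir -p /tmp/hide-demo && cd /tmp/hide-demo
touch report.txt
mv report.txt .report.txt   # a leading dot hides the file
ls                          # plain listing omits dotfiles
ls -a                       # -a reveals them again
```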

From a GUI file manager, go to View and check the option Show Hidden Files to view hidden files or directories.

View Hidden File Using File Manager

How to Compress Files and Directories with a Password

In order to add a little security to your hidden files, you can compress them with a password and then hide them from a GUI file manager as follows.

Select the file or directory, right-click on it, and choose Compress from the menu. When the compression preferences interface appears, click on “Other options” to get the password option, as shown in the screenshot below.

Once you have set the password, click on Create.

Compress Files with Password in Linux

From now on, each time anyone wants to open the file, they’ll be asked to provide the password created above.

Enter Password to View Files

Now you can hide the file by renaming it with a . as we explained before.
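
If you prefer the terminal, one possible CLI equivalent (an assumption, not the article's GUI method) is to combine tar with openssl; the file names and password below are made up for illustration:

```shell
# Sandbox demo: password-protect a file with tar + openssl, then hide it.
mkdir -p /tmp/enc-demo && cd /tmp/enc-demo
echo "confidential notes" > private.txt
# Encrypt a tar stream with a passphrase (-pbkdf2 needs OpenSSL 1.1.1+),
# writing straight to a dot-prefixed (hidden) file.
tar czf - private.txt | openssl enc -aes-256-cbc -pbkdf2 -salt \
    -k 'Examp1ePassword' -out .private.tar.gz.enc
rm private.txt
```

To restore the file later, decrypt and unpack in one pipe: openssl enc -d -aes-256-cbc -pbkdf2 -k 'Examp1ePassword' -in .private.tar.gz.enc | tar xzf -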

pyDash – A Web Based Linux Performance Monitoring Tool

pydash is a lightweight web-based monitoring tool for Linux written in Python and Django plus Chart.js. It has been tested and can run on the following mainstream Linux distributions: CentOS, Fedora, Ubuntu, Debian, Arch Linux, Raspbian as well as Pidora.

You can use it to keep an eye on your Linux PC/server resources such as CPUs, RAM, network stats, and processes, including online users and more. The dashboard is developed entirely using Python libraries provided in the main Python distribution, so it has few dependencies; you don't need to install many packages or libraries to run it.

In this article, we will show you how to install pydash to monitor Linux server performance.

How to Install pyDash in Linux System

1. First install required packages: git and Python pip as follows:

-------------- On Debian/Ubuntu -------------- 
$ sudo apt-get install git python-pip
-------------- On CentOS/RHEL -------------- 
# yum install epel-release
# yum install git python-pip
-------------- On Fedora 22+ --------------
# dnf install git python-pip

2. If you have git and Python pip installed, next install virtualenv, which helps to deal with dependency issues for Python projects, as below:

# pip install virtualenv
$ sudo pip install virtualenv

3. Now using git command, clone the pydash directory into your home directory like so:

# git clone https://github.com/k3oni/pydash.git
# cd pydash

4. Next, create a virtual environment for your project called pydashtest using the virtualenv command below.

$ virtualenv pydashtest #give a name for your virtual environment like pydashtest

Create Virtual Environment

Important: Take note the virtual environment’s bin directory path highlighted in the screenshot above, yours could be different depending on where you cloned the pydash folder.

5. Once you have created the virtual environment (pydashtest), you must activate it before using it as follows.

$ source /home/aaronkilik/pydash/pydashtest/bin/activate

Active Virtual Environment

From the screenshot above, you’ll note that the PS1 prompt changes indicating that your virtual environment has been activated and is ready for use.

6. Now install the pydash project requirements; if you are curious, view the contents of requirements.txt using the cat command, and then install them as shown below.

$ cat requirements.txt
$ pip install -r requirements.txt

7. Now move into the pydash directory containing settings.py, or simply run the command below to open the file, and change the SECRET_KEY to a custom value.

$ vi pydash/settings.py

Set Secret Key

Save the file and exit.

8. Afterward, run the Django command below to create the project database, install Django's auth system, and create a project superuser. (Note: syncdb applies to older Django versions; on Django 1.7 and later it has been replaced by the migrate and createsuperuser commands.)

$ python manage.py syncdb

Answer the questions below according to your scenario:

Would you like to create one now? (yes/no): yes
Username (leave blank to use 'root'): admin
Email address: aaronkilik@gmail.com
Password: ###########
Password (again): ############

Create Project Database

9. At this point, all should be set, now run the following command to start the Django development server.

$ python manage.py runserver

10. Next, open your web browser and go to http://127.0.0.1:8000/ (the Django development server's default address) to get the web dashboard login interface. Enter the superuser name and password you created while creating the database and installing Django's auth system in step 8, and click Sign In.

pyDash Login Interface

11. Once you login into pydash main interface, you will get a section for monitoring general system info, CPU, memory and disk usage together with system load average.

Simply scroll down to view more sections.

pyDash Server Performance Overview

12. Next is a screenshot of pydash showing a section for keeping track of network interfaces, IP addresses, Internet traffic, disk reads/writes, online users, and netstat data.

pyDash Network Overview

13. Next is a screenshot of the pydash main interface showing a section to keep an eye on active processes on the system.

pyDash Active Linux Processes

fswatch – Monitors Files and Directory Changes or Modifications in Linux

fswatch is a cross-platform file change monitor that delivers notification alerts when the contents of the specified files or directories are altered or modified.

It implements several types of monitors for different operating systems:

  1. A monitor built on the File System Events API of Apple OS X.
  2. A monitor based on kqueue, a notification interface introduced in FreeBSD 4.1 and also supported on many *BSD systems, OS X included.
  3. A monitor based on the File Events Notification API of the Solaris kernel and its spin-offs.
  4. A monitor based on inotify, a kernel subsystem that exposes file system modifications to applications.
  5. A monitor based on ReadDirectoryChangesW, a Windows API that records changes to a directory.
  6. A monitor that periodically checks the status of the file system, keeps file modification times in memory, and manually determines file system changes (this works anywhere stat can be used).

Features of fswatch

  1. Supports several OS-specific APIs
  2. Allows recursive directory monitoring
  3. Performs path filtering using inclusion and exclusion regular expressions
  4. Supports customizable record format
  5. Additionally, it supports periodic idle events

How To Install fswatch in Linux Systems

Unfortunately, the fswatch package is not available in the default system repositories of most Linux distributions. The only way to install the latest version of fswatch is to build it from the source tarball, as shown in the following installation instructions.

First grab the latest fswatch tarball using the following wget command and install it as shown:

$ wget https://github.com/emcrisostomo/fswatch/releases/download/1.9.3/fswatch-1.9.3.tar.gz
$ tar -xvzf fswatch-1.9.3.tar.gz
$ cd fswatch-1.9.3
$ ./configure
$ make
$ sudo make install 

Important: Make sure you have GNU GCC (C and C++ compiler) and Development Tools (build-essential on Debian/Ubuntu) installed on the system before you compile fswatch from source. If not, install them using the appropriate command for your Linux distribution:

# yum group install 'Development Tools'		[On CentOS/RHEL]
# dnf group install 'Development Tools'		[On Fedora 22+ Versions]
$ sudo apt-get install build-essential          [On Debian/Ubuntu Versions]

On Debian/Ubuntu distributions, you might get the following error while executing the fswatch command:

fswatch: error while loading shared libraries: libfswatch.so.6: cannot open shared object file: No such file or directory

To fix it, execute the command below; this refreshes the links and cache for the dynamic libraries before you start using fswatch.

$ sudo ldconfig

How do I use fswatch on Linux?

The general syntax for running fswatch is:

$ fswatch [option] [path]

On Linux, it is recommended that you use the default inotify monitor. You can list available monitors by employing the -M or --list-monitors option:

$ fswatch -M
$ fswatch --list-monitors

fswatch - List Monitors

The command below enables you to watch the changes in the current directory (/home/tecmint), with events being delivered to standard output every 4 seconds.

The -l or --latency option allows you to set the latency in seconds, the default being 1 second.

$ fswatch -l 4 .

fswatch - Monitor Home Directory Changes

The next command monitors changes to the /var/log/auth.log file every 5 seconds:

$ fswatch -l 5 /var/log/auth.log

Using the -t or --timestamp option prints the time stamp for every event; to print the time in UTC format, employ the -u or --utc-time option. You can also format the time using the -f or --format-time option:

$ fswatch --timestamp /var/log/auth.log

Next, -x or --event-flags tells fswatch to print the event flags alongside the event path. You can use the --event-field-separator option to print events using a particular separator.

$ fswatch --event-flags ~ /var/log/auth.log

To print the numeric value of an event indicating changes in your home directory and /var/log/auth.log file, use -n or --numeric option as below:

$ fswatch --numeric ~ /var/log/auth.log 

You can look through the fswatch man page for detailed usage options and information:

$ man fswatch

Pyinotify – Monitor Filesystem Changes in Real-Time in Linux

Pyinotify is a simple yet useful Python module for monitoring filesystem changes in real time on Linux.

As a System administrator, you can use it to monitor changes happening to a directory of interest such as web directory or application data storage directory and beyond.

It depends on inotify (a Linux kernel feature incorporated in kernel 2.6.13), an event-driven notifier whose notifications are exported from kernel space to user space via three system calls.

The purpose of pyinotify is to bind these three system calls and provide an implementation on top of them, offering a common and abstract means to manipulate those functionalities.

In this article, we will show you how to install and use pyinotify in Linux to monitor filesystem changes or modifications in real-time.


In order to use pyinotify, your system must be running:

  1. Linux kernel 2.6.13 or higher
  2. Python 2.4 or higher

How to Install Pyinotify in Linux

First start by checking the kernel and Python versions installed on your system as follows:

# uname -r 
# python -V

Once dependencies are met, we will use pip to install pyinotify. In most Linux distributions, pip is already installed if you're using Python 2 >= 2.7.9 or Python 3 >= 3.4 binaries downloaded from python.org; otherwise, install it as follows:

# yum install python-pip      [On CentOS based Distros]
# apt-get install python-pip  [On Debian based Distros]
# dnf install python-pip      [On Fedora 22+]

Now, install pyinotify like so:

# pip install pyinotify

This installs the version available from the default repository; if you want the latest stable version of pyinotify, consider cloning its git repository as shown.

# git clone https://github.com/seb-m/pyinotify.git
# cd pyinotify/
# ls
# python setup.py install

How to Use pyinotify in Linux

In the example below, I am monitoring any changes to the user tecmint's home directory (/home/tecmint) as the root user (logged in via ssh), as shown in the screenshot:

# python -m pyinotify -v /home/tecmint

Monitor Directory Changes

Next, we will keep a watch for any changes to the web directory (/var/www/html/tecmint.com):

# python -m pyinotify -v /var/www/html/tecmint.com

To exit the program, simply hit [Ctrl+C].

Note: When you run pyinotify without specifying any directory to monitor, the /tmp directory is considered by default.