Wazuh: Issues encountered and solutions

 

  1. logstash service does not find config files in /etc/logstash/conf.d
    I installed Logstash via the CentOS RPM and placed a valid Logstash configuration file into /etc/logstash/conf.d. Starting the service fails with the following error message:

    {:timestamp=>"2015-10-21T08:11:06.939000+0000", :message=>"Error: No config files found: /etc/logstash/conf.d/pipe.conf\nCan you make sure this path is a logstash config file?", :file=>"logstash/agent.rb", :line=>"159", :method=>"execute"}
    {:timestamp=>"2015-10-21T08:11:06.948000+0000", :message=>"You may be interested in the '--configtest' flag which you can\nuse to validate logstash's configuration before you choose\nto restart a running system.", :file=>"logstash/agent.rb", :line=>"161", :method=>"execute"}
    

    There is definitely a file in this location, readable and writable by anyone:

    [root@logserv logstash]# ls -al /etc/logstash/conf.d/
    total 12
    drwxr-xr-x. 2 root root 4096 Oct 21 08:00 .
    drwx------. 4 root root 4096 Oct 21 07:54 ..
    -rwxrwxrwx. 1 root root  234 Oct 21 07:56 pipe.conf

    and its content is valid

    [root@logserv logstash]# bin/logstash -f /etc/logstash/conf.d/pipe.conf --configtest
    Configuration OK

    Solution:

    chown -R logstash:root /etc/logstash/conf.d
    chmod 0750 /etc/logstash/conf.d
    chmod 0640 /etc/logstash/conf.d/*
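
    After applying these changes, restart the service and confirm it stays up (assuming a systemd host, as elsewhere in these notes):

    systemctl restart logstash
    systemctl status logstash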
  2. If the Wazuh UI shows data in wazuh-alerts but not in any of the Wazuh dashboards, first check whether the data is reaching Elasticsearch:

    curl localhost:9200/_cat/indices

    The output should look like below:

    green open wazuh-alerts-3.x-2019.07.19 GIPOTyJuSxSZgVtsdkouxg 3 0 131 0 424.7kb 424.7kb
    green open .kibana_task_manager cCFAzTqIQ6GuhVtJsfuUrQ 1 0 2 0 29.5kb 29.5kb
    yellow open .wazuh tgqyhP1rQHqRk4bnfvjivg 1 1 1 0 11kb 11kb
    green open wazuh-alerts-3.x-2019.07.20 vbSs-0TRRRKihI3vo67C0w 3 0 10 0 79.7kb 79.7kb
    green open wazuh-alerts-3.x-2019.07.21 GYbynBOLTsedyuxIVfSmig 3 0 9 0 80.7kb 80.7kb
    green open .kibana_1 24p2awqCTFafufPXuTkM_A 1 0 6 2 110.6kb 110.6kb
    green open wazuh-monitoring-3.x-2019.07.18 GtPTclhVS6CveIoTB9s88w 2 0 192 0 174.7kb 174.7kb
    green open wazuh-monitoring-3.x-2019.07.20 skX7aKIMTNa20VKvZdG-gg 2 0 192 0 210.9kb 210.9kb
    green open wazuh-monitoring-3.x-2019.07.17 fERZ9LMeQheBUDo4CFZgbw 2 0 98 0 215.1kb 215.1kb
    green open wazuh-monitoring-3.x-2019.07.21 fDT71M7bSNawPplieIEXRg 2 0 46 0 208.6kb 208.6kb
    yellow open .wazuh-version 2TPSH17YQ4e_n6NiWDhQqQ 1 1 1 0 5.2kb 5.2kb
    green open wazuh-monitoring-3.x-2019.07.19 8RDIhk0EQIOxNIYOOh6VXA 2 0 198 0 140.1kb 140.1kb

    Check whether a wazuh-alerts-3.x-* index exists for the current date. If it does, check whether data is present in the index:

    curl "localhost:9200/<INDEX_NAME>/_search?pretty=true&size=1000"
    Example:
    curl "localhost:9200/wazuh-alerts-3.x-2019.07.19/_search?pretty=true&size=1000"

    This should return the first 1000 entries.
    Check whether the latest entries are present, and whether manager.name matches the hostname of the manager. If not, change the hostname of the manager by executing:

    hostname <HOSTNAME>
    Example:
    hostname abc.example.com

    New alerts in the Elasticsearch index will then come in with manager.name set to abc.example.com
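
    To check only the entries reported by a given manager, a term query on manager.name can be used; a minimal sketch reusing the example index and hostname from above:

    curl -s "localhost:9200/wazuh-alerts-3.x-2019.07.19/_search?pretty" \
      -H 'Content-Type: application/json' \
      -d '{"size": 1, "query": {"term": {"manager.name": "abc.example.com"}}}'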

  3. Get details of a Wazuh agent:

    curl -u foo:bar localhost:55000/agents/<AGENT_ID>

    foo:bar are the credentials for the Wazuh API.
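
    For example, with agent ID 001 (the ID and credentials here are placeholders):

    curl -u foo:bar "localhost:55000/agents/001?pretty"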

  4. Get details of nodes in Wazuh cluster:
    curl -u foo:bar localhost:55000/cluster/nodes?pretty
  5. Rule changes in /var/ossec/ruleset/rules are not taking effect
    Restarting the wazuh-manager should reload the rules:

    systemctl restart wazuh-manager
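
    Before restarting, you can optionally feed a sample log line to the bundled log test tool to see which rule it matches (path assumes a default Wazuh 3.x install; the tool reads log lines from stdin):

    /var/ossec/bin/ossec-logtest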

  6. No events in wazuh-monitoring index:
    Check if index wazuh-monitoring-3.x-* is present with today’s date:

    curl elastic:9200/_cat/indices/wazuh-monitoring*

    Check if there is any error in wazuhapp for wazuh-monitoring:

    cat /usr/share/kibana/optimize/wazuh-logs/wazuhapp-plain.log | grep monitoring

    Execute:

    curl -XGET "http://elastic:9200/_cat/templates/wazuh-agent"

    If you get something like:
    wazuh-agent [wazuh-monitoring*, wazuh-monitoring-3.x-*] 0
    You probably have a template issue. Execute the following to resolve it:

    Stop Kibana: systemctl stop kibana
    
    Delete the monitoring template: curl -XDELETE elastic:9200/_template/wazuh-agent
    
    Start Kibana again: systemctl restart kibana. It should re-insert the monitoring template, and the Kibana UI should start working again shortly.
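
    Once Kibana is back up, re-run the template check from above to confirm the wazuh-agent template was re-created:

    curl -XGET "http://elastic:9200/_cat/templates/wazuh-agent"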
  7. Change the Wazuh app to debug mode:
    Edit /usr/share/kibana/plugins/wazuh/config.yml
    Replace #logs.level: info with logs.level: debug, then restart the Kibana service (systemctl restart kibana).
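
    The same change can be scripted; a one-liner sketch assuming GNU sed and the default plugin path from above:

    sed -i 's/^#\?logs\.level:.*/logs.level: debug/' /usr/share/kibana/plugins/wazuh/config.yml
    systemctl restart kibana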
  8. Wazuh UI Error: “Saved field parameter is now invalid” OR “Error in Visualisation: field is a required parameter”
    You will require a cleanup. Execute the following commands:
    Delete wazuh-alerts-3.x-* index with today’s date:

    curl -XDELETE localhost:9200/wazuh-alerts-3.x-2019.07.18
    systemctl restart wazuh-manager
    curl -XDELETE localhost:9200/.kiban*
    systemctl restart kibana
    rm -f /var/ossec/queue/db/0*
    systemctl restart wazuh-manager
  9. Generate SCA alerts:
    rm -f /var/ossec/queue/db/0*
    systemctl restart wazuh-manager
  10. Error in visualisation: Expected numeric type on field on data.sca.score got numeric:
    You most probably have a wrong template. Install the template matching your Wazuh version from the Wazuh GitHub repo. To install the latest (3.9.3), execute the following:

    curl https://raw.githubusercontent.com/wazuh/wazuh/v3.9.3/extensions/elasticsearch/6.x/wazuh-template.json | curl -X PUT "http://localhost:9200/_template/wazuh" -H 'Content-Type: application/json' -d @-
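
    You can confirm the template was loaded by listing it afterwards (the ?v parameter just adds column headers):

    curl "localhost:9200/_cat/templates/wazuh?v"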
  11. java.lang.IllegalArgumentException: Rejecting mapping update to [wazuh-alerts-3.x] as the final mapping would have more than 1 type:
    This most likely means you have a wrong template. Install the latest one as in the step above.
    If that is already done, you might have an issue with your Logstash configuration:

    curl -so /etc/logstash/conf.d/01-wazuh.conf https://raw.githubusercontent.com/wazuh/wazuh/v3.9.3/extensions/logstash/7.x/01-wazuh-remote.conf

    Delete wazuh-alerts-3.x-* index with today’s date:

    curl -XDELETE localhost:9200/wazuh-alerts-3.x-2019.07.18
    systemctl restart wazuh-manager
    curl -XDELETE localhost:9200/.kiban*
    systemctl restart kibana
    rm -f /var/ossec/queue/db/0*
    systemctl restart wazuh-manager
  12. Version mismatch:
    If you get this error in the Wazuh UI, execute the following commands:

    service kibana stop
    curl -XDELETE localhost:9200/.wazuh
    curl -XDELETE localhost:9200/.wazuh-version
    service kibana start
  13. Other useful commands:
    List indices and their document counts:
    curl elastic:9200/_cat/indices/wazuh-monitoring*
    
    Check wazuhapp logs:
    cat /usr/share/kibana/optimize/wazuh-logs/wazuhapp-plain.log | grep monitoring
    
    Check wazuh config:
    cat /usr/share/kibana/plugins/wazuh/config.yml
    
    Get Wazuh id:
    curl elastic:9200/.wazuh/_search?pretty -s | grep "_id"
    
    Templates:
    curl elastic:9200/_cat/templates/wazuh
    
    Version:
    cat /usr/share/kibana/plugins/wazuh/package.json | grep version
    cat /etc/ossec-init.conf | grep -i version
    curl -u foo:bar localhost:55000/version
    
    Monitoring settings:
    cat /usr/share/kibana/plugins/wazuh/config.yml | grep monitoring
    
    Search monitoring index:
    curl elastic:9200/wazuh-monitoring*/_search
    
    List of agents:
    curl -u foo:bar "localhost:55000/agents?q=id!=000"
    
    Index settings:
    curl  "elastic:9200/wazuh-monitoring-3.x-2019.07.17/_settings"
    
    Get details of template:
    curl -XGET "http://elastic:9200/_cat/templates/wazuh-agent"
    
    Check if filebeat is configured correctly:
    filebeat test output
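    
    Validate the filebeat.yml syntax (companion to the output test above):
    filebeat test config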

Install and configure Wazuh with ELK 6.x

Wazuh helps you to gain deeper security visibility into your infrastructure by monitoring hosts at an operating system and application level. This solution, based on lightweight multi-platform agents, provides the following capabilities:

  • File integrity monitoring

Wazuh monitors the file system, identifying changes in content, permissions, ownership, and attributes of files that you need to keep an eye on.

 

  • Intrusion and anomaly detection

Agents scan the system looking for malware, rootkits or suspicious anomalies. They can detect hidden files, cloaked processes or unregistered network listeners, as well as inconsistencies in system call responses.

 

  • Automated log analysis

Wazuh agents read operating system and application logs, and securely forward them to a central manager for rule-based analysis and storage. The Wazuh rules help make you aware of application or system errors, misconfigurations, attempted and/or successful malicious activities, policy violations and a variety of other security and operational issues.

 

  • Policy and compliance monitoring

Wazuh monitors configuration files to ensure they are compliant with your security policies, standards and/or hardening guides. Agents perform periodic scans to detect applications that are known to be vulnerable, unpatched, or insecurely configured.

This diverse set of capabilities is provided by integrating OSSEC, OpenSCAP and Elastic Stack into a unified solution and simplifying their configuration and management.

Execute the following commands to install and configure Wazuh:

  1. apt-get update

  2. apt-get install curl apt-transport-https lsb-release gnupg2

  3. curl -s https://packages.wazuh.com/key/GPG-KEY-WAZUH | apt-key add -

  4. echo "deb https://packages.wazuh.com/3.x/apt/ stable main" | tee -a /etc/apt/sources.list.d/wazuh.list

  5. apt-get update

  6. apt-get install wazuh-manager

  7. systemctl status wazuh-manager

  8. curl -sL https://deb.nodesource.com/setup_8.x | bash -

  9. apt-get install gcc g++ make

  10. apt-get install -y nodejs

  11. curl -sL https://dl.yarnpkg.com/debian/pubkey.gpg | sudo apt-key add -

  12. echo "deb https://dl.yarnpkg.com/debian/ stable main" | sudo tee /etc/apt/sources.list.d/yarn.list

  13. sudo apt-get update && sudo apt-get install yarn

  14. apt-get install nodejs

  15. apt-get install wazuh-api

  16. systemctl status wazuh-api

  17. sed -i "s/^deb/#deb/" /etc/apt/sources.list.d/wazuh.list

  18. apt-get update

  19. curl -s https://artifacts.elastic.co/GPG-KEY-elasticsearch | apt-key add -

  20. echo "deb https://artifacts.elastic.co/packages/6.x/apt stable main" | tee /etc/apt/sources.list.d/elastic-6.x.list

  21. apt-get update

  22. apt-get install filebeat

  23. curl -so /etc/filebeat/filebeat.yml https://raw.githubusercontent.com/wazuh/wazuh/v3.9.3/extensions/filebeat/6.x/filebeat.yml

  24. Edit the file /etc/filebeat/filebeat.yml and replace  YOUR_ELASTIC_SERVER_IP with the IP address or the hostname of the Logstash server.

  25. apt search elasticsearch

  26. apt-get install elasticsearch

  27. systemctl daemon-reload

  28. systemctl enable elasticsearch.service

  29. systemctl start elasticsearch.service

  30. curl https://raw.githubusercontent.com/wazuh/wazuh/v3.9.3/extensions/elasticsearch/6.x/wazuh-template.json | curl -X PUT "http://localhost:9200/_template/wazuh" -H 'Content-Type: application/json' -d @-

  31. curl -X PUT "http://localhost:9200/*/_settings?pretty" -H 'Content-Type: application/json' -d'
    {
      "settings": {
        "number_of_replicas": 0
      }
    }'

  32. sed -i 's/#bootstrap.memory_lock: true/bootstrap.memory_lock: true/' /etc/elasticsearch/elasticsearch.yml

  33. sed -i 's/^-Xms.*/-Xms12g/;s/^-Xmx.*/-Xmx12g/' /etc/elasticsearch/jvm.options

  34. mkdir -p /etc/systemd/system/elasticsearch.service.d/

  35. echo -e "[Service]\nLimitMEMLOCK=infinity" > /etc/systemd/system/elasticsearch.service.d/elasticsearch.conf

  36. systemctl daemon-reload

  37. systemctl restart elasticsearch

  38. apt-get install logstash

  39. curl -so /etc/logstash/conf.d/01-wazuh.conf https://raw.githubusercontent.com/wazuh/wazuh/master/extensions/logstash/6.x/01-wazuh-local.conf

  40. systemctl daemon-reload

  41. systemctl enable logstash.service

  42. systemctl start logstash.service

  43. systemctl status filebeat

  44. systemctl start filebeat

  45. apt-get install kibana

  46. export NODE_OPTIONS="--max-old-space-size=3072"

  47. sudo -u kibana /usr/share/kibana/bin/kibana-plugin install https://packages.wazuh.com/wazuhapp/wazuhapp-3.9.3_6.8.1.zip

  48. Kibana will only listen on the loopback interface (localhost) by default. To set up Kibana to listen on all interfaces, edit the file /etc/kibana/kibana.yml uncommenting the setting server.host. Change the value to:
    server.host: "0.0.0.0"

  49. systemctl enable kibana.service

  50. systemctl start kibana.service

  51. cd /var/ossec/api/configuration/auth

  52. Create a username and password for Wazuh API. When prompted, enter the password:
    node htpasswd -c user admin

  53. systemctl restart wazuh-api
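
Once everything is installed, a quick sanity check helps confirm the stack is wired together; this is a minimal sketch assuming the default ports and service names from the steps above (alerts indices will only appear once agents start reporting):

systemctl status wazuh-manager wazuh-api elasticsearch logstash kibana filebeat
curl -s "localhost:9200/_cat/templates/wazuh?v"
curl -s "localhost:9200/_cat/indices/wazuh-alerts*"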

Then in the agent machine execute the following commands:

  1. apt-get install curl apt-transport-https lsb-release gnupg2
  2. curl -s https://packages.wazuh.com/key/GPG-KEY-WAZUH | apt-key add -
  3. echo "deb https://packages.wazuh.com/3.x/apt/ stable main" | tee /etc/apt/sources.list.d/wazuh.list
  4. apt-get update
  5. You can automate the agent registration and configuration using variables. It is necessary to define at least the variable WAZUH_MANAGER_IP. The agent will use this value to register and it will be the assigned manager for forwarding events.
    WAZUH_MANAGER_IP="10.0.0.2" apt-get install wazuh-agent
  6. sed -i "s/^deb/#deb/" /etc/apt/sources.list.d/wazuh.list
  7. apt-get update
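
To confirm the agent registered and connected, a couple of checks can be run (a sketch; the first two commands on the agent, the last one on the manager):

systemctl status wazuh-agent
tail -n 20 /var/ossec/logs/ossec.log
/var/ossec/bin/agent_control -l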

In this section, we’ll register the Wazuh API (installed on the Wazuh server) into the Wazuh App in Kibana:

  1. Open a web browser and go to the Elastic Stack server’s IP address on port 5601 (default Kibana port). Then, from the left menu, go to the Wazuh App.


  2. Click on Add new API.


  3. Fill in Username and Password with the credentials you created in the previous step. Enter http://MANAGER_IP for the URL, where MANAGER_IP is the real IP address of the Wazuh server. Enter "55000" for the Port.

  4. Click on Save.


Track down vulnerable applications

Of the many software packages installed on your Red Hat, CentOS, and/or Ubuntu systems, which ones have known vulnerabilities that might impact your security posture? Wazuh helps you answer this question with the syscollector and vulnerability-detector modules. On each agent, syscollector can scan the system for the presence and version of all software packages. This information is submitted to the Wazuh manager where it is stored in an agent-specific database for later assessment. On the Wazuh manager, vulnerability-detector maintains a fresh copy of the desired CVE sources of vulnerability data, and periodically compares agent packages with the relevant CVE database and generates alerts on matches.

In this lab, we will configure syscollector to run on wazuh-server and on both of the Linux agents. We will also configure vulnerability-detector on wazuh-server to periodically scan the collected inventory data for known vulnerable packages. We will observe relevant log messages and vulnerability alerts in Kibana, including a dashboard dedicated to this. We will also interact with the Wazuh API to mine the inventory data more deeply, and even take a look at the databases where it is stored.

Configure syscollector for the Linux agents

In /var/ossec/etc/shared/linux/agent.conf on wazuh-server, just before the open-scap wodle configuration section, insert the following so each Linux agent will scan itself.

<wodle name="syscollector">
  <disabled>no</disabled>
  <interval>1d</interval>
  <scan_on_start>yes</scan_on_start>
  <hardware>yes</hardware>
  <os>yes</os>
  <packages>yes</packages>
</wodle>

Run verify-agent-conf to confirm no errors were introduced into agent.conf.
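
For example (path assumes a default Wazuh install):

/var/ossec/bin/verify-agent-conf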

Configure vulnerability-detector and syscollector on wazuh-server

In ossec.conf on wazuh-server, just before the open-scap wodle configuration section, insert the following so that it will inventory its own software plus scan all collected software inventories against published CVEs, alerting where there are matches:

<wodle name="vulnerability-detector">
  <disabled>no</disabled>
  <interval>5m</interval>
  <run_on_start>yes</run_on_start>
  <feed name="ubuntu-18">
    <disabled>no</disabled>
    <update_interval>1h</update_interval>
  </feed>
</wodle>

Restart the Wazuh manager. This will also cause the agents to restart as they pick up their new configuration:

  1. For Systemd:

systemctl restart wazuh-manager

Look at the logs

The vulnerability-detector module generates logs on the manager, and syscollector does as well on the manager and agents.

Try grep syscollector: /var/ossec/logs/ossec.log on the manager and on an agent:

2018/02/23 00:55:33 wazuh-modulesd:syscollector: INFO: Module started.
2018/02/23 00:55:34 wazuh-modulesd:syscollector: INFO: Starting evaluation.
2018/02/23 00:55:35 wazuh-modulesd:syscollector: INFO: Evaluation finished.

Then try grep vulnerability-detector: /var/ossec/logs/ossec.log on the manager:

2018/02/23 00:55:33 wazuh-modulesd:vulnerability-detector: INFO: (5461): Starting Red Hat Enterprise Linux 7 DB update...
2018/02/23 00:55:33 wazuh-modulesd:vulnerability-detector: INFO: (5452): Starting vulnerability scanning.
2018/02/23 00:55:33 wazuh-modulesd:vulnerability-detector: INFO: (5453): Vulnerability scanning finished.

See the alerts in Kibana

Search Kibana for location:"vulnerability-detector" AND data.vulnerability.severity:"High", selecting some of the more helpful fields for viewing.

Expand one of the records to see all the information available.

Look deeper with the Wazuh API:

Up to now we have only seen the Wazuh API enable the Wazuh Kibana App to interface directly with the Wazuh manager. However, you can also access the API directly from your own scripts or from the command line with curl. This is especially helpful here as full software inventory data is not stored in Elasticsearch or visible in Kibana – only the CVE match alerts are. The actual inventory data is kept in agent-specific databases on the Wazuh manager. To see that, plus other information collected by syscollector, you can mine the Wazuh API. Not only are software packages inventoried, but basic hardware and operating system data is also tracked.

  1. Run agent_control -l on wazuh-server to list your agents as you will need to query the API by agent id number:
Wazuh agent_control. List of available agents:
  ID: 000, Name: wazuh-server (server), IP: localhost, Active/Local
  ID: 001, Name: linux-agent, IP: any, Active
  ID: 002, Name: elastic-server, IP: any, Active
  ID: 003, Name: windows-agent, IP: any, Active
  2. On wazuh-server, query the Wazuh API for scanned hardware data about agent 002.
# curl -u wazuhapiuser:wazuhlab -k -X GET "https://localhost:55000/syscollector/002/hardware?pretty"

The results should look like this:

{
  "error": 0,
  "data": {
      "board_serial": "unknown",
      "ram": {
        "total": 8009024,
        "free": 156764
      },
      "cpu": {
        "cores": 2,
        "mhz": 2400.188,
        "name": "Intel(R) Xeon(R) CPU E5-2676 v3 @ 2.40GHz"
      },
      "scan": {
        "id": 1794797325,
        "time": "2018/02/18 02:05:31"
      }
  }
}
  3. Next, query the Wazuh API for scanned OS data about agent 002.
# curl -u wazuhapiuser:wazuhlab -k -X GET "https://localhost:55000/syscollector/002/os?pretty"

The results should look like this:

{
  "error": 0,
  "data": {
      "sysname": "Linux",
      "version": "#1 SMP Thu Jan 25 20:13:58 UTC 2018",
      "architecture": "x86_64",
      "scan": {
        "id": 1524588903,
        "time": "2018/02/23 01:12:21"
      },
      "release": "3.10.0-693.17.1.el7.x86_64",
      "hostname": "elastic-server",
      "os": {
        "version": "7 (Core)",
        "name": "CentOS Linux"
      }
  }
}
  4. You can also query the software inventory data in many ways. Let's list the versions of wget on all of our Linux systems:
# curl -u wazuhapiuser:wazuhlab -k -X GET "https://localhost:55000/syscollector/packages?pretty&search=wget"

mapper_parsing_exception from logstash

I am receiving the following logstash errors after installing X-Pack in logstash:
[2018-06-14T13:24:40,458][WARN ][logstash.outputs.elasticsearch] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"wazuh-alerts-3.x-2018.06.14", :_type=>"wazuh", :_routing=>nil}, #<LogStash::Event:0x31b18a15>], :response=>{"index"=>{"_index"=>"wazuh-alerts-3.x-2018.06.14", "_type"=>"wazuh", "_id"=>"kZ54_mMB86eT4RWzM1CD", "status"=>400, "error"=>{"type"=>"mapper_parsing_exception", "reason"=>"failed to parse [host]", "caused_by"=>{"type"=>"illegal_state_exception", "reason"=>"Can't get text on a START_OBJECT at 1:114"}}}}}
The problem is that Filebeat 6.3.0 adds a new field named "host" on its own, but we already have the field "hostname", so "host" is not needed.
On the Elasticsearch side, that field is not in our template and we do not want it, so the solution is to modify
the Logstash configuration (mutate section):
mutate { remove_field => [ "timestamp", "beat", "input_type", "tags", "count", "@version", "log", "offset", "type","@src_ip", "host" ]}

Ubuntu 18.04: switch back to /etc/network/interfaces

Starting sometime around Ubuntu 18.04, the Ubuntu devs stopped using the classic /etc/init.d/networking and /etc/network/interfaces method of configuring the network and switched to something called netplan. This has made a lot of people very angry and been widely regarded as a bad move. Is it possible to remove netplan and go back to the classic /etc/network/interfaces method of configuring the network?

The following procedure works for Ubuntu 18.04 (Bionic Beaver)

I. Reinstall the ifupdown package:

# apt-get update
# apt-get install ifupdown

II. Configure your /etc/network/interfaces file with configuration stanzas such as:

source /etc/network/interfaces.d/*

# The loopback network interface
auto lo
iface lo inet loopback

allow-hotplug enp0s3
auto enp0s3
iface enp0s3 inet static
  address 192.168.1.133
  netmask 255.255.255.0
  broadcast 192.168.1.255
  gateway 192.168.1.1
  # Only relevant if you make use of RESOLVCONF(8)
  # or similar...
  dns-nameservers 1.1.1.1 1.0.0.1

III. Make the configuration effective (no reboot needed):

# ifdown --force enp0s3 lo && ifup -a
# systemctl unmask networking
# systemctl enable networking
# systemctl restart networking

IV. Disable and remove the unwanted services:

# systemctl stop systemd-networkd.socket systemd-networkd \
networkd-dispatcher systemd-networkd-wait-online
# systemctl disable systemd-networkd.socket systemd-networkd \
networkd-dispatcher systemd-networkd-wait-online
# systemctl mask systemd-networkd.socket systemd-networkd \
networkd-dispatcher systemd-networkd-wait-online
# apt-get --assume-yes purge nplan netplan.io

Then, you’re done.

Note: You MUST, of course, adapt the values according to your system (network, interface name…).

V. DNS Resolver

Because Ubuntu Bionic Beaver (18.04) makes use of the DNS stub resolver provided by SYSTEMD-RESOLVED.SERVICE(8), you SHOULD also add the DNS servers to contact to the /etc/systemd/resolved.conf file. For instance:

....
DNS=1.1.1.1 1.0.0.1
....

and then restart the systemd-resolved service once done:

# systemctl restart systemd-resolved

The DNS entries in the ifupdown INTERFACES(5) file, as shown above, are only relevant if you make use of RESOLVCONF(8) or similar.
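
To confirm the interface and DNS configuration took effect, a couple of checks can be run (interface name and DNS values assumed from the example above):

# ip addr show enp0s3
# systemd-resolve --status | head -n 20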

How to Add Memory, vCPU, Hard Disk to Linux KVM Virtual Machine

1. Add Memory to VM

In this example, let us increase the memory of the myRHELVM1 VM from 2GB to 4GB.

First, shutdown the VM using virsh shutdown as shown below:

# virsh shutdown myRHELVM1
Domain myRHELVM1 is being shutdown

Next, edit the VM using virsh edit:

# virsh edit myRHELVM1

Look for the below line and change the value for memory to the following. In my example, earlier it was 2097152:

<memory unit='KiB'>4194304</memory>

Please note that the above value is in KiB. After making the change, save and exit:

# virsh edit myRHELVM1
Domain myRHELVM1 XML configuration edited.

Restart the VM with the updated configuration file. Now you will see the max memory increased from 2G to 4G.

You can now dynamically modify the VM memory up to the 4GB max limit.

Start the VM from its domain XML file using virsh create:

# virsh create /etc/libvirt/qemu/myRHELVM1.xml
Domain myRHELVM1 created from /etc/libvirt/qemu/myRHELVM1.xml

View the available Memory for this domain. As you see below, even though the maximum available memory is 4GB, this domain only has 2GB (Used memory).

# virsh dominfo myRHELVM1 | grep memory
Max memory:     4194304 KiB
Used memory:    2097152 KiB

Set the memory for this domain to 4GB using virsh setmem as shown below:

# virsh setmem myRHELVM1 4194304

Now, the following indicates that we’ve allocated 4GB (Used memory) to this domain.

# virsh dominfo myRHELVM1 | grep memory
Max memory:     4194304 KiB
Used memory:    4194304 KiB

2. Add VCPU to VM

To increase the virtual CPU that is allocated to the VM, do virsh edit, and change the vcpu parameter as explained below.

In this example, let us increase the vCPU count of the myRHELVM1 VM from 2 to 4.

First, shutdown the VM using virsh shutdown as shown below:

# virsh shutdown myRHELVM1
Domain myRHELVM1 is being shutdown

Next, edit the VM using virsh edit:

# virsh edit myRHELVM1

Look for the below line and change the value for vcpu to the following. In my example, earlier it was 2.

<vcpu placement='static'>4</vcpu>

Start the VM from its domain XML file using virsh create:

# virsh create /etc/libvirt/qemu/myRHELVM1.xml
Domain myRHELVM1 created from /etc/libvirt/qemu/myRHELVM1.xml

View the virtual CPUs allocated to this domain as shown below. This indicates that we’ve increased the vCPU from 2 to 4.

# virsh dominfo myRHELVM1 | grep -i cpu
CPU(s):         4
CPU time:       21.0s
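
As an alternative to editing the XML by hand, the persistent vCPU count of a defined (non-transient) domain can also be changed with virsh setvcpus; a sketch assuming the same domain name (the count cannot exceed the domain's maximum vCPUs):

# virsh setvcpus myRHELVM1 4 --config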

3. Add Disk to VM

In this example, we have only two virtual disks (vda1 and vda2) on this VM.

# fdisk -l | grep vd
Disk /dev/vda: 10.7 GB, 10737418240 bytes
/dev/vda1   *           3        1018      512000   83  Linux
/dev/vda2            1018       20806     9972736   8e  Linux LVM

There are two steps involved in creating and attaching a new storage device to Linux KVM guest VM:

  • First, create a virtual disk image
  • Attach the virtual disk image to the VM

Let us create one more virtual disk and attach it to our VM. For this, we first need to create a disk image file using the qemu-img create command as shown below.

In the following example, we are creating a virtual disk image with 7GB of size. The disk images are typically located under /var/lib/libvirt/images/ directory.

# cd /var/lib/libvirt/images/

# qemu-img create -f raw myRHELVM1-disk2.img 7G
Formatting 'myRHELVM1-disk2.img', fmt=raw size=7516192768

To attach the newly created disk image, use the virsh attach-disk command as shown below:

# virsh attach-disk myRHELVM1 --source /var/lib/libvirt/images/myRHELVM1-disk2.img --target vdb --persistent
Disk attached successfully

The above virsh attach-disk command has the following parameters:

  • myRHELVM1 The name of the VM
  • --source The full path of the source disk image. This is the one that we created using the qemu-img command above, i.e. myRHELVM1-disk2.img
  • --target This is the target device. In this example, we want to attach the given disk image as /dev/vdb. Please note that we don't really need to specify /dev; it is enough to specify just vdb.
  • --persistent indicates that the disk attached to the VM will be persistent.

As you see below, the new /dev/vdb is now available on the VM.

# fdisk -l | grep vd
Disk /dev/vda: 10.7 GB, 10737418240 bytes
/dev/vda1   *           3        1018      512000   83  Linux
/dev/vda2            1018       20806     9972736   8e  Linux LVM
Disk /dev/vdb: 7516 MB, 7516192768 bytes

Now, you can partition the /dev/vdb device, and create multiple partitions /dev/vdb1, /dev/vdb2, etc, and mount it to the VM. Use fdisk to create the partitions as we explained earlier.
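
For example, inside the guest (the single partition, ext4 filesystem, and /data mount point are just for illustration):

# parted -s /dev/vdb mklabel msdos mkpart primary ext4 1MiB 100%
# mkfs.ext4 /dev/vdb1
# mkdir -p /data
# mount /dev/vdb1 /data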

Similarly, to detach a disk from the guest VM, you can use the command below. Be careful to specify the correct vd* device, otherwise you may end up removing the wrong one.

# virsh detach-disk myRHELVM1 vdb
Disk detached successfully

4. Save Virtual Machine Configuration

If you make a lot of changes to your VM, it is recommended that you save the configuration.

Use the virsh dumpxml command to take a backup of your VM's configuration as shown below.

# virsh dumpxml myRHELVM1 > myrhelvm1.xml

# ls myrhelvm1.xml
myrhelvm1.xml

Once you have the configuration file in the XML format, you can always recreate your guest VM from this XML file, using virsh create command as shown below:

virsh create myrhelvm1.xml

5. Delete KVM Virtual Machine

If you’ve created multiple VMs for testing purpose, and like to delete them, you should do the following three steps:

  • Shutdown the VM
  • Destroy the VM (and undefine it)
  • Remove the Disk Image File

In this example, let us delete myRHELVM2 VM. First, shutdown this VM:

# virsh shutdown myRHELVM2
Domain myRHELVM2 is being shutdown

Next, destroy this VM as shown below:

# virsh destroy myRHELVM2
Domain myRHELVM2 destroyed

Apart from destroying it, you should also undefine the VM as shown below:

# virsh undefine myRHELVM2
Domain myRHELVM2 has been undefined

Finally, remove any disk image files that you created for this VM from the /var/lib/libvirt/images directory:

rm /var/lib/libvirt/images/myRHELVM2-disk1.img
rm /var/lib/libvirt/images/myRHELVM2-disk2.img

Expect script suppress output

You might write your program like this:

#!/bin/sh
output=$(expect -c '
# suppress the display of the process interaction
log_user 0

spawn telnet '"$HOST $PORT"'
sleep 1
send "\r"
send "\r"
# after a prompt, send the interesting command
expect Prompt> { send "dir\r"  }
# eat the \n the remote end sent after we sent our \r
expect "\n"
# wait for the next prompt, saving its position in expect_out(buffer)
expect -indices Prompt>

# output what came after the command and before the next prompt
# (i.e. the output of the "dir" command)
puts [string range $expect_out(buffer) \
                   0 [expr $expect_out(0,start) - 1]]
')
echo "======="
echo "$output"
echo "======="
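
The script expects HOST and PORT to be set in its environment; a hypothetical invocation (the filename and target host are placeholders):

HOST=192.0.2.10 PORT=23 sh suppress-output.sh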