David Lane

About David Lane

Installing MediaWiki on Ubuntu 18

A buddy sent me a request. He was installing MediaWiki on Ubuntu and having issues, so he asked me to take a look. I reviewed articles on Linux Support and HowtoForge about installing MediaWiki and found them to be a tad dated. So, I went through the installation myself, and here is how I installed it.

All steps are done as a sudoer or as the root user. I did this on AWS with an Ubuntu 18.04 minimal base image. I assume you know how to log into a console. I used Apache. You can use Nginx, but the server directions are different and I did not have a chance to try them out.

Update the OS

sudo apt-key adv --recv-keys --keyserver hkp://keyserver.ubuntu.com:80 0xF1656F24C74CD1D8
sudo add-apt-repository "deb [arch=amd64,arm64,ppc64el] http://mariadb.mirror.liquidtelecom.com/repo/10.4/ubuntu $(lsb_release -cs) main"
sudo apt-get update
sudo apt-get upgrade

Install basic packages

sudo apt-get install -y apache2 software-properties-common
sudo apt -y install mariadb-server mariadb-client
sudo apt install php libapache2-mod-php
sudo apt-get install imagemagick php7.2-fpm php7.2-intl php7.2-xml php7.2-curl php7.2-gd php7.2-mbstring php7.2-mysql php-apcu php7.2-zip

Once PHP is installed you will get a notice similar to:

NOTICE: Not enabling PHP 7.2 FPM by default.
NOTICE: To enable PHP 7.2 FPM in Apache2 do:
NOTICE: a2enmod proxy_fcgi setenvif
NOTICE: a2enconf php7.2-fpm

I enabled it after the fact and it worked. You can do it now or later as you desire.
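If you choose to enable it now, the commands from the notice, followed by an Apache restart, are all that is needed (a minimal sketch; the restart assumes Apache was installed in the previous step):

sudo a2enmod proxy_fcgi setenvif
sudo a2enconf php7.2-fpm
sudo systemctl restart apache2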

Modify PHP settings (Optional)

If you are putting your server into production, use the following settings initially. If you are just looking around, the default php.ini settings are fine except for the timezone settings. You should set the timezone appropriately.

For production, edit /etc/php/7.2/apache2/php.ini and make the following changes:

memory_limit = 256M
upload_max_filesize = 100M
max_execution_time = 360
date.timezone = America/New_York
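Changes to this php.ini only take effect once Apache has been restarted, so after editing it (and once Apache is running), a restart picks up the new values:

sudo systemctl restart apache2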

Run the secure installation for MariaDB (Optional)

If you are running a production server, you should do a secure installation.

sudo mysql_secure_installation

Create the MediaWiki table space

Login to MariaDB

mariadb -u root -p

And create the MediaWiki user and database as follows:

CREATE DATABASE mediadb;
CREATE USER 'media'@'localhost' IDENTIFIED BY 'password';
GRANT ALL ON mediadb.* TO 'media'@'localhost' IDENTIFIED BY 'password' WITH GRANT OPTION;
FLUSH PRIVILEGES;
EXIT;

Where password is a secure password. This will be put into the MediaWiki configuration later, so do not forget it. The database mediadb and user media can be anything you want them to be.
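As an optional sanity check, you can confirm that the new user can log in and see the database before moving on:

mariadb -u media -p -e 'SHOW DATABASES;'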

Edit Apache’s site configuration

You will need to add MediaWiki to the site configuration. Create a new file called mediawiki.conf

sudo vi /etc/apache2/sites-available/mediawiki.conf

And add the following:

<VirtualHost *:80>
    ServerAdmin admin@example.com
    DocumentRoot /var/www/html/mediawiki/
    ServerName example.com
    <Directory /var/www/html/mediawiki/>
        Options +FollowSymLinks
        AllowOverride All
    </Directory>
    ErrorLog /var/log/apache2/media-error_log
    CustomLog /var/log/apache2/media-access_log common
</VirtualHost>

Where the ServerAdmin variable should be a real email address and the ServerName should be the domain name of the server. Also, ensure that the DocumentRoot is correct. If you only want to use MediaWiki, you can set the DocumentRoot to /var/www/html, but you will have to modify a step below as well.
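Before enabling the site, it is worth checking that the configuration parses cleanly:

sudo apache2ctl configtest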

Restart everything

Do not reboot the server yet! Instead, enable the new site and the rewrite module, then make sure the key services are started and enabled.

sudo a2ensite mediawiki.conf
sudo a2enmod rewrite
sudo systemctl start apache2
sudo systemctl enable apache2
sudo systemctl start mariadb
sudo systemctl enable mariadb
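If Apache was already running when you enabled the site (it usually is on Ubuntu, since the package starts it at install time), reload it so the new virtual host is picked up:

sudo systemctl reload apache2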

Download the current MediaWiki source

From the MediaWiki site, make sure you have the correct version. As of this writing, it is: mediawiki-1.33.1

Change to a temporary directory, download, untar, and move the file to the web server:

wget https://releases.wikimedia.org/mediawiki/1.33/mediawiki-1.33.1.tar.gz
tar zxvf mediawiki-1.33.1.tar.gz 
sudo mkdir -p /var/www/html/mediawiki
sudo mv mediawiki*/* /var/www/html/mediawiki

If you modified the DocumentRoot in the Apache configuration to /var/www/html, you will need to modify the command above. You will only need to move the contents of the base mediawiki folder:

sudo mv mediawiki*/* /var/www/html
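Whichever location you use, it can help to hand the tree over to the web server user so the installer and image uploads can write where they need to (a commonly recommended step, not strictly required; adjust the path if you used /var/www/html):

sudo chown -R www-data:www-data /var/www/html/mediawiki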

Point your browser at the web site

Depending on your configuration, you can either use localhost or the hostname of your server. If you used the mediawiki folder option, you have to put the folder on the end:

http://<hostname>/mediawiki

Good luck!

Web Links

MAX and Human Errors

What Really Brought Down the Boeing 737 Max? – The New York Times

In the drama of the 737 Max, it was the decisions made by four of those pilots, more than the failure of a single obscure component, that led to 346 deaths and the worldwide grounding of the entire fleet.

I am not a pilot, and I have never been at the controls of an airplane. This very long article does go into a number of issues surrounding a complicated piece of technology. Take a read. It does not take any responsibility off of Boeing, but it certainly does not make them out to be the only villain in the story.

Checks and Balances

Trump impeachment: Lindsey Graham will ‘not pretend to be a fair juror’

Asked if it was appropriate for him as a prospective juror to be discussing the case in such terms, he said: “Well, I must think so because I’m doing it.”

Once upon a time, the Founding Fathers instituted a provision to remove a sitting President for High Crimes and Misdemeanors. Since that time four US Presidents have come under those provisions. Prior to this case, the Senators who should be trying the case (as jurors) kept their opinions to themselves. Not this time. This time, they are coming out and telling us exactly how they are going to vote. Before even hearing one witness. Before even seeing one document. They feel that this is a hit job and this is how they are going to vote.

If they were a real jury of their peers, they would be dismissed, at the very least. I am sure there would be other ramifications. But these are United States Senators. The Founding Fathers are wondering what has happened to the Republic they strove so hard to create.

That is a big hole

Scientists have discovered deepest point on land | WTOP

The trough is about 3.5 km (about 2 miles) below sea level but there is no ocean water there. Instead, it is filled with ice flowing from the interior of the ice sheet towards the coast. The trough measures about 100 km in length and is 20 km wide, according to the study.

Think about how long 100 km is. The District of Columbia is 16 km on a side. According to Wikipedia, 100 km is 9/10 as long as the English Channel and not quite as wide as the narrowest point. And it is on land! Think about that with your morning coffee.

Review of the YSmart TIPEN

YSmart introduced a new pen on Kickstarter earlier in 2019 (and it is now available to purchase on Indiegogo). Since I have not met a pen that I did not like, especially one made out of virtually indestructible, go-anywhere metal, I bought into the program. My pens arrived this week and here is my review.

First, this pen is tiny. I mean really, really small. For comparison, in the image above, the YSmart TIPEN is resting against the ruler; end to end it is barely 2 inches long. For scale, above it are the Fisher Bullet Space pen, a basic black marker, and a standard, freshly sharpened number 2 pencil. Uncapped, the pen is even smaller.

Unlike the Bullet pen, which will take its cap on the back and gives you an extra inch or so, the TIPEN cap will not fit on the back, leaving the pen at 2 inches. For those of us with long fingers, this becomes a bit of a problem when writing, especially if you are used to resting the pen against your finger.

This also impacts the quality of your writing, especially over time. You will not be writing long epistles with the TIPEN, but it is useful to have around for quick notes and shopping lists.

The ink is similar in feel to the Fisher refill, which is why I chose it for comparison. It is not a ballpoint per se, nor is it a gel ink (my preferred ink in non-fountain pens). It writes smoothly and with no skips once started. YSmart additionally claims that the nib is unbreakable and suitable for opening packages, paint cans, and other non-writing tasks.

For an EDC pen, it would not be my first choice. Despite its slightly larger size, I would select the Bullet pen, or its brother, the Trekker, with its key-chain ring attachment. But as an emergency pen, the TIPEN is a good choice. You can put it on your key ring and forget about it until you need it.

Historic…what?

Lawmakers unveil details of ‘historic’ federal paid parental leave benefits | Federal News Network

The annual defense policy bill, if passed by both chambers of Congress and signed into law by the president, would grant federal employees up to 12 weeks of paid leave for the birth, adoption or foster of a new child.

If you live in the United States, and you are an employee of the United States Federal Government, and you are planning to have a family, this is a wonderful benefit, assuming the bill is actually passed, which with this current government, is doubtful. However, to call it historic, or even wonderful, falls well short of the mark.

Other backslapping terms cited in the article include watershed, life-changing, and monumental. If the bill passes, this will go into effect in October of 2020, more than a year from now. Please make a note of that in your family planning.

Why am I so contemptuous of this policy? For starters, it only applies to employees of the Federal Government. Without a calculator, I cannot accurately estimate the impact, but the number of people this will benefit is a fraction of a percent of the overall workforce in the United States once you consider who is of childbearing or family-starting age and who is an actual Federal Employee. It does not cover contractors or anyone else who toils for a paycheque in the United States.

I am also derisive of this policy because it still falls considerably short of the policies of other First World/developed nations. According to the United Nations, of 193 countries, only a handful have no national paid parental leave law. Guess who they are? Papua New Guinea, a few South Pacific island nations, and the United States. The Federal law would align the benefit for Federal Employees with the basic minimums already prevalent in the world, which means we are still not on par with countries like Sudan (oh, wait, never mind, everyone in Sudan gets 13 weeks).

Most of the developed nations start at 26 weeks and go up from there, with a guarantee that your job will still be there should you decide to return. Let me say that again: if you go on maternity leave, most policies guarantee that you will have a job when you return. It is not clear that such a guarantee is in this bill; I will need to check, but I would hope it is. Of course, this bill was supposed to be a larger bill that also provided for the care of sick family members, a growing problem in the United States as the cost of health care skyrockets and the population ages at a rapid rate. That is still a significant hurdle that many people, not just Federal Employees, need to overcome on a daily basis.

I congratulate the Federal Government. An organization that has been the whipping boy of both the President and Congress, where hiring the best and the brightest has never been easy, now has a benefit worth writing home about. If the bill gets passed. And signed. And not watered down in committee, and any of a dozen other things that could happen before October of next year. Now, what about the rest of the population?

I Am Not Filling Out Your Survey

And if I do, you won’t like the outcome.

Grade inflation, or rather star inflation, is rampant in online shopping. I blame Amazon. But it has gotten so carried away that everyone from the checkout clerk at the grocery store to the guy that sends me simple screws is asking for my review of their performance, or their product. Let’s face it; I do not have the time.

If you keep insisting, I am going to grade you precisely the same way I graded my employees. I used to work for a company with a hideous annual review process. If you did your job, you got a three out of five. If you aspired to a higher grade, there were strict criteria. To get a four, you had to be recognized as an expert or leader by other people in the company. We had over seven hundred people; that kind of recognition rarely happened. We used to say God had you on speed-dial. To get a five, you had to be recognized as a leader in your industry. God had you on speed-dial.

I am going to take the same tack with surveys. If it does the job, you get a three. To get more stars than that, you are going to have to rock my world. Second, if you sell me a product, do not ask me for my initial opinion; I probably have not had time to use it properly. Ask me again in another month. Or another six months. That will allow me to evaluate your product correctly. Frankly, unless it changes my world, you will still only get three stars. The number of things that have risen to that level in my life is so small it can barely be counted.

Just stop asking.

When Privacy and Reality Interconnect

His privacy being paramount, Kelly grudgingly chooses to head into Columbia every so often, rather than cede his data to Google or turn over his purchase history to another online retailer. “I’m just not sure why Google needs to know what breakfast cereal I eat,” the 51-year-old said. Washington Post

There are a couple of things to notice here.

First: Google is not the only company out there snarfing up your data. Zuckerbergland apps, Verizon (you know, AOL, Yahoo, Tumblr), and Microsoft (LinkedIn, Bing, and all those Microsoft apps like Word) are only some of them.

Second: Most websites have some form of tracking software on them, and they can be related to any of the three or more listed above.

Third: Despite what the EU would have you believe, GDPR is not your salvation. Many websites outside the EU say, in the small print, that the site is not intended for consumption by people in the EU, which means the GDPR has zero impact.

And realistically, if you do not want to be tracked, there is only one way to avoid it: stay off the Internet. And that includes no smart devices (there is tracking software on them too), no credit cards (who do you think came up with the idea of tracking purchases?), and no cheques. In fact, depending on where you live, you are being watched by CCTV cameras, with the video uploaded and searched for malcontents using AI and facial recognition software. If you travel, you are tracked whether by plane, train, or automobile (toll plazas, rest stops…). Let’s face it, unless you are a hermit, you have no privacy.

And ironically, we all know that Mr Kelly, who is 51 years-old, likes to eat Bob’s Red Mill muesli cereal. So his privacy is now shot too, because he talked to a reporter, and the story ended up…on the Internet.

Monitoring with Prometheus Exercises

Time: 60 minutes

Audience: Developers & DevOps Personnel

Purpose: Introduce basic monitoring using Prometheus so you may apply new skills.

Goals:

  • Instrument an application using Prometheus
  • Visualize data using Grafana

Prerequisites

Before beginning these exercises, it is assumed that the student has a certain level of familiarity with Linux and the command-line, as well as Git commands.

Set Up The Environment

For this exercise, we will use a version of Linux (CentOS) packaged as a Bento box for Vagrant. If you have experience with other virtual environments, you can use them instead.

Regardless of which environment you select, some Linux command-line skills are assumed.

If you are using a physical environment, skip ahead to downloading the correct version of Prometheus for your OS. If you are using a virtual environment, follow the steps below.

A host operating system is the OS the virtual environment will run on. The guest is the virtual environment.

Virtual Box and Vagrant

  1. Download the appropriate version of VirtualBox and the VM VirtualBox Extension Pack.
  2. Install VirtualBox according to the instructions for your host operating system.
  3. Download the appropriate version of Vagrant for the host operating system you chose for VirtualBox. (For Windows, you will want the 64-bit version for Windows 7 and up.)
  4. Install Vagrant according to the instructions for your host operating system.[1]
  5. Verify that Vagrant installed correctly. Open a command line (Terminal) and type vagrant --version
  6. Fetch an appropriate Vagrant box. A network connection from the host is needed for this. For this exercise, we will use the CentOS 7.6 Bento box. To get the box, type vagrant init bento/centos-7.6 [2]
  7. Everything is ready, so start the box. Type: vagrant up
  8. Verify there are no errors. If there are none, the box started successfully; if there are errors, check them carefully. They are usually related to networking issues or limits on access to the PC.
  9. Once the VirtualBox guest is up, log in. Open a command prompt and type vagrant ssh

It is important to put vagrant in front of the ssh command so it finds the key correctly to log in. Feel free to tour around inside the guest.

  10. In order to access the Prometheus web port, it is necessary to modify the Vagrantfile, usually located in the root of the host’s home directory: either ~/Vagrantfile on a Mac or C:\Users\username on Windows. Look for the line # config.vm.network "private_network", ip: "192.168.33.10". Remove the comment marker #, or retype the line as shown below. Any valid private IP address can be used.
  11. Reload the guest. Type: vagrant reload.
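After removing the comment marker, the relevant line in the Vagrantfile looks like this (the IP shown is the default from the file; any valid private address will do, and it is the address used later to reach Prometheus and Grafana):

config.vm.network "private_network", ip: "192.168.33.10"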

Add Git

You will need to install git on the virtual machine.

  1. Ensure you are logged into the guest with vagrant ssh
  2. Type $ sudo yum install git -y
  3. It will install about 32 packages and quit without error.
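A quick way to confirm git is available on the guest:

git --version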

Add Docker

To demonstrate the Java metrics and Grafana, we will use a Docker container with some prebaked examples, but to run it, we first need to install Docker.

  1. Install the needed packages.
    $ sudo yum install -y yum-utils device-mapper-persistent-data lvm2
  2. Configure the repository.
    $ sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
  3. Install the Docker Community Version.
    $ sudo yum install docker-ce
  4. Add the User to the Docker group.
    $ sudo usermod -aG docker $(whoami)
  5. Set Docker to start at boot.
    $ sudo systemctl enable docker.service
  6. Start Docker.
    $ sudo systemctl start docker.service

For this exercise we will not need Docker Compose.
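To confirm the daemon is working, you can run the standard hello-world image (note that the group change from step 4 only takes effect after you log out and back in, so sudo is still used here):

sudo docker run --rm hello-world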

Install Prometheus

  1. Visit the Prometheus website and download the most recent release of Prometheus for the linux-amd64 platform (for example, prometheus-2.9.2.linux-amd64.tar.gz).
  2. Transfer the tarball to your guest. This can be done via scp. On the host OS, install the vagrant-scp plugin. Type: vagrant plugin install vagrant-scp Then type: vagrant scp local_file_path_in_HostOS :remote_file_path_in_GuestOS. For example: vagrant scp prometheus-2.9.2.linux-amd64.tar.gz :~/ (If your guest OS is not running you will get an error message. You can type vagrant up to start it.)

  3. The scp will put the tarball in the root of the home directory of the default (vagrant) user.
  4. Untar the file. Type tar -zxvf prometheus-2.9.2.linux-amd64.tar.gz
  5. Change into the Prometheus directory. You can test Prometheus by typing $ ./prometheus, which will print a number of startup messages.

Then type <CTRL>-C to stop it, for now.
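While it is running, you can confirm from a second vagrant ssh session (or from the host, using the private_network address) that Prometheus is serving on its default port, 9090:

curl -s localhost:9090/metrics | head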

Instrumentation

Adding Java instrumentation to your code can be managed in a couple of ways. As we discussed in class, here are some examples.

Counter Example

import io.prometheus.client.Counter;
class YourClass {
  static final Counter requests = Counter.build()
     .name("requests_total").help("Total requests.").register();

  void processRequest() {
    requests.inc();
    // Your code here.
  }
}

Gauge Example

import io.prometheus.client.Gauge;

class YourClass {
  static final Gauge inprogressRequests = Gauge.build()
     .name("inprogress_requests").help("Inprogress requests.").register();

  void processRequest() {
    inprogressRequests.inc();
    // Your code here.
    inprogressRequests.dec();
  }
}

Putting it all together

import io.prometheus.client.Counter;
import io.prometheus.client.Gauge;
import io.prometheus.client.Histogram;
import io.prometheus.client.Summary;
import io.prometheus.client.exporter.HTTPServer;

import java.io.IOException;
import java.util.Random;

public class Main {

   private static double rand(double min, double max) {
       return min + (Math.random() * (max - min));
   }

   public static void main(String[] args) {
       // Register one metric of each type under the "java" namespace.
       Counter counter = Counter.build().namespace("java").name("my_counter").help("This is my counter").register();
       Gauge gauge = Gauge.build().namespace("java").name("my_gauge").help("This is my gauge").register();
       Histogram histogram = Histogram.build().namespace("java").name("my_histogram").help("This is my histogram").register();
       Summary summary = Summary.build().namespace("java").name("my_summary").help("This is my summary").register();

       // Background thread that updates each metric once a second.
       Thread bgThread = new Thread(() -> {
           while (true) {
               try {
                   counter.inc(rand(0, 5));
                   gauge.set(rand(-5, 10));
                   histogram.observe(rand(0, 5));
                   summary.observe(rand(0, 5));


                   Thread.sleep(1000);
               } catch (InterruptedException e) {
                   e.printStackTrace();
               }
           }
       });
       bgThread.start();

       try {
           // Expose the metrics over HTTP on port 8080.
           HTTPServer server = new HTTPServer(8080);
       } catch (IOException e) {
           e.printStackTrace();
       }
   }
}

Using io.prometheus

You can also modify your build to pull in the client libraries. For Maven, you add dependencies to your pom.xml, something like this:

<!-- The client -->
<dependency>
  <groupId>io.prometheus</groupId>
  <artifactId>simpleclient</artifactId>
  <version>0.6.0</version>
</dependency>
<!-- Hotspot JVM metrics-->
<dependency>
  <groupId>io.prometheus</groupId>
  <artifactId>simpleclient_hotspot</artifactId>
  <version>0.6.0</version>
</dependency>
<!-- Exposition HTTPServer-->
<dependency>
  <groupId>io.prometheus</groupId>
  <artifactId>simpleclient_httpserver</artifactId>
  <version>0.6.0</version>
</dependency>
<!-- Pushgateway exposition-->
<dependency>
  <groupId>io.prometheus</groupId>
  <artifactId>simpleclient_pushgateway</artifactId>
  <version>0.6.0</version>
</dependency>

For more examples, please consult the Prometheus Java Client page.

Java Metrics and Prometheus

A Docker container with the examples above can be installed.

  1. Type $ git clone https://github.com/sysdiglabs/custom-metrics-examples
  2. Type $ sudo docker build custom-metrics-examples/prometheus/java -t prometheus-java

Depending on the state of your local Docker cache, the build may need to download a number of layers. This will take a couple of minutes.

  1. Type $ sudo docker run -d --rm --name prometheus-java -p 8080:8080 -p 80:80 prometheus-java
  2. Once you have a running container, check the exported output by typing $ curl localhost:8080, which will return the example metrics in Prometheus’ text exposition format.
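To narrow the output to just the metrics defined in the example code, a simple grep works (the java_ prefix comes from the namespace used in the Java example above):

curl -s localhost:8080 | grep java_my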

These are now exporting successfully from the application. Now we need to configure Prometheus.

  1. Change into the Prometheus directory
  2. Move or delete the prometheus.yml file that ships in the tarball; we will create our own. This is the core configuration file for Prometheus.

This file is YAML ("YAML Ain't Markup Language"), so spaces matter. Prometheus itself serves its web interface and metrics on port 9090, while our Java application exports its metrics on port 8080; we will start by scraping Prometheus itself and add the Java port shortly.

global:
  scrape_interval: 10s
scrape_configs:
 - job_name: prometheus
   static_configs:
    - targets:
       - localhost:9090

This is a basic configuration.

global: Defines the global parameters for Prometheus operation. The scrape interval is the time between scrapes of each target; Prometheus defaults to one minute (1m). For demonstration purposes we will use ten seconds. The actual interval should balance the need for fresh metrics against the load on the system.

scrape_configs: Defines the job name, labels, and other settings for each scrape. Prometheus scrapes over HTTP by default but can be configured for other schemes, and it can pass usernames and passwords if needed. The entire documented suite of variables is available in the Prometheus documentation.

For this exercise, we will scrape the localhost, port 9090, every 10 seconds. The job name is prometheus. Save the YAML above as prometheus.yml, then start Prometheus with $ ./prometheus.
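If you save the file somewhere other than the Prometheus directory, you can point at it explicitly with the standard flag (the path here is just a placeholder):

./prometheus --config.file=/path/to/prometheus.yml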

Open a web browser and point it at 192.168.33.10:9090 (or the address you configured in step 10 of the Vagrant setup). This is the main visualization page. From here you can examine a number of default metrics collected by Prometheus. You can verify the host status by clicking on Status | Targets, which will show the configured targets and whether they are UP.

If you click on the Prometheus name, you will return to the PromQL Expression Browser main screen, where you can type any number of PromQL (the Prometheus Query Language) queries. For example, typing up will return a value of 1 for each target that is up. This is useful for debugging.
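A couple of other simple queries to try in the expression browser, using the job label defined in prometheus.yml (illustrative PromQL, not required for the exercise):

up{job="prometheus"}
count(up == 1)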

If you click on Insert Metric at Cursor, you can get an idea of the default metrics that can be viewed. You will notice there are no Java metrics yet.

To add them:

  1. Stop Prometheus (<CTRL>-C)
  2. Edit the YAML file to add the 8080 port. It will look like this:
global:
  scrape_interval: 10s
scrape_configs:
 - job_name: prometheus
   static_configs:
    - targets:
       - localhost:9090
       - localhost:8080
  3. Restart Prometheus.
  4. Refresh your browser and the Java metrics will show up.
  5. You can visualize them with the built-in graph feature (this view fills in after the Prometheus server has completed several scrape cycles); see the example query below.
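For example, graphing the per-second rate of the example counter over the last minute uses the counter defined in the Java example above (depending on the client version, it may be exposed as java_my_counter or java_my_counter_total):

rate(java_my_counter[1m])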

Prometheus configuration

There are hundreds of configuration options for Prometheus, covering additional hosts and other exporters. For example, if you have a number of hosts, you might see a configuration like this:

global:
  scrape_interval: 10s
  evaluation_interval: 10s
rule_files:
 - rules.yml
scrape_configs:
 - job_name: prometheus
   static_configs:
    - targets:
       - localhost:9090
       - 192.168.1.181:9090
       - 192.168.1.183:9090
       - 192.168.1.184:9090
       - 192.168.1.181:8080
 - job_name: node
   static_configs:
    - targets:
       - localhost:9100
       - 192.168.1.181:9100
       - 192.168.1.183:9100
       - 192.168.1.184:9100

This configuration comes from a network with four instances of Prometheus, collecting data from the node_exporter module (which exports metrics related to the OS, network, and other node-level information). Aggregating the network interfaces across these nodes gives you a combined view of network traffic.

The Prometheus documentation provides a number of configuration suggestions, and the prometheus tag on Stack Overflow has just as many. There is also a great book, Prometheus: Up & Running, available through O’Reilly’s Safari subscription.

Grafana

The Prometheus dashboard is handy for quick sanity checks, but it is not a great solution for long-term use. Enter Grafana, a popular tool for building just these sorts of dashboards.

The easiest way to plug into our existing configuration is to grab the Grafana Docker container. This does not use a volume mount, so it is not a good long-term solution.

$ sudo docker run -d --name=grafana --net=host grafana/grafana:5.0.0

Grafana will present itself on port 3000. Log in as admin/admin

  1. Add a data source and set its type to Prometheus (you can name it whatever you want; I chose Prometheus).
  2. Make sure the address is set correctly, then click Save & Test to confirm Grafana can reach Prometheus.
  3. Return to the dashboard, click edit, and type in java_my_gauge; you will see the values being pulled from your Prometheus instance.
  4. Over time, your graphs will begin to fill in.

Putting it all together

Now that you have an overview of the pieces, take some existing code and modify it to include counters or gauges as appropriate, build it, and then run it where Prometheus can capture the metrics. Once you have done that, validate that they are being picked up by looking in the Prometheus web page and then in Grafana.

Additional Exercises

  • Look at /home/vagrant/custom-metrics-examples/prometheus/java/src/main/java/Main.java in the custom-metrics-examples repository you cloned
  • Grab the node_exporter module, add it to your Prometheus configuration, and experiment with the metrics it exposes.
  • Try out some PromQL, for example rate(node_network_receive_bytes_total[1m])
  • Play with labels in Prometheus. How can they benefit you?
  • Look at the alerting module. What changes would you need to make to the alertmanager.yml to make it work in your environment?
  • Play with Grafana. What other visualizations can you create?
  [1] This is not a course in how to use or manipulate VirtualBox or Vagrant. If you need more help, there is plenty of documentation on the Internet. Generally, both VirtualBox and Vagrant install without issues.
  [2] If you have other Vagrant boxes configured, you will get an error about an existing Vagrantfile. You can modify your existing Vagrantfile to include this new box.