Mermaid For Diagrams

A couple of months ago, I found a tool called Mermaid, a JavaScript library that translates Markdown-like text into process diagrams and other relationship diagrams. This is a really cool thing.

Most of us, at some point in our careers, have been forced to create a process diagram of some kind. Or maybe it's an entity relationship diagram (ERD) or a user journey. If it is a simple diagram, you will often open your PowerPoint analog and hope you can make the boxes and arrows do what you want them to do. For more complicated diagrams, you probably opened a dedicated diagramming tool like Lucidchart, Miro, or Visio.

The biggest problem with all of these tools is keeping the diagram current. The second problem is sharing the document. While many people assume everyone runs Microsoft Office, the reality is that Office and its analogs are falling behind the state of the art. Producing documents on tablets and phones calls for lighter-weight applications, and what is lighter weight than raw text?

Markdown, properly parsed, can produce elegant, multi-platform documents that can be managed in standard version control systems without the syntactic overhead associated with even the lightest-weight output from applications like Word. Markdown supports PDFs, web pages, scripts, tables, and now, with Mermaid, complex diagrams!

Because of the varying levels of support in browsers and operating systems, there are many different ways to parse the text so everyone can see it. Let’s look at an example.

I took an existing process diagram that I had created in Lucidchart and translated it into Mermaid, and the code looks like this:

flowchart LR
    A[Rally Stories & Tasks] --> B[Developer Creates Code]
    B --> |GitHub Enterprise| C{Security Scan}
    C --> |Scan Fail| A 
    C --> |Scan Pass| D[Build Unit Artifact]
    D --> E{Automated Tests incl TDD-BDD-Smoke}
    E --> |Pass| G[Automatic Build of Materials]
    E --> |Fail| F[Teams Alert for SM]
    F --> A
    G --> H[Deployment to End-to-End]
    H --> I{Automated Tests incl TDD-BDD-Smoke}
    I --> |Pass| J[Artifactory-Promotable]
    I --> |Fail| F 

Using the Mermaid Markdown parser (available from the Mac or iOS store), you get a simple diagram that looks like this:

But what is really nice is that, depending on your browser support and underlying OS, if I embed the code above and wrap it in the right tags, you will see the rendered diagram; otherwise you will get raw text, like the code block above (at least in WordPress). Even better, if you check it into GitHub (with the wrappers changed, as sketched below), you have live diagrams that sit in version control and can be easily updated as the project or goals change. No more passing around files that cannot be edited, or can only be edited by one person. No more asking whether this is current. And that makes knowledge transfer easier.
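
As of this writing, GitHub's wrapper is simply a fenced code block tagged mermaid; here is a minimal sketch (the three-node flow is an invented example, not the pipeline diagram above):

```mermaid
flowchart LR
    A[Commit] --> B{CI Pipeline}
    B --> |Pass| C[Deploy]
    B --> |Fail| A
```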

Depending on your browser and OS, what is below is either the diagram, or the code.

flowchart LR
    A[Rally Stories & Tasks] --> B[Developer Creates Code]
    B --> |GitHub Enterprise| C{Security Scan}
    C --> |Scan Fail| A
    C --> |Scan Pass| D[Build Unit Artifact]
    D --> E{Automated Tests incl TDD-BDD-Smoke}
    E --> |Pass| G[Automatic Build of Materials]
    E --> |Fail| F[Teams Alert for SM]
    F --> A
    G --> H[Deployment to End-to-End]
    H --> I{Automated Tests incl TDD-BDD-Smoke}
    I --> |Pass| J[Artifactory-Promotable]
    I --> |Fail| F

Understanding The Cloud

I had the opportunity to teach last month. One of the topics I covered was the cloud as an environment and as a platform. One of the most significant issues I had in conveying the information was a general lack of understanding of what comprises today's platforms and of how it all works. I ended up describing in detail the process of sending data from one machine to another, regardless of whether that machine was a phone or a computer, and how it traversed the network, be it cellular or physical cable. I thought this explanation might benefit others.

ISO OSI layer model

Before we can discuss the process, we have to understand our stack. In this case, it is the standard ISO Open Systems Interconnection (OSI) model. The model, from top to bottom, looks like this:

  • Layer 7 - Applications Layer
  • Layer 6 - Presentation Layer
  • Layer 5 - Session Layer
  • Layer 4 - Transport Layer
  • Layer 3 - Network Layer (sometimes called the routing layer)
  • Layer 2 - Data Link Layer (sometimes called the switching layer)
  • Layer 1 - Physical Layer

It is essential to understand what happens at each of these layers from a theoretical perspective, especially if you have to debug a problem in your environment. It is also essential to recognize that this is a model of how data flows through the system. Certain aspects of the model may be bypassed by specific applications or protocols. But generally, if you understand this model, the rest of the process will flow from here.

In brief:

Layer 1 - Physical

The physical layer is responsible for the transmission and reception of unstructured raw data between a device and a physical transmission medium such as Ethernet (Cat 5/6 copper cable), any of the various forms of fibre (used for both network and server-to-storage transfers), or coax (used primarily for long-haul and building connections). It converts the digital bits into electrical, radio, or optical signals. It is the cables that push data between servers and between servers and storage. Bluetooth can be thought of as a physical layer connection, although it did not exist when the original model was developed. X.25 is one of the earliest protocols developed to support the physical layer.

Layer 2 - Data Link Layer

The data link layer provides node-to-node data transfer: a link between two directly connected nodes. Here we begin to talk about the frames of a data packet and the establishment of both the medium access control (MAC) sublayer, where devices gain access to the network layer protocols, and the logical link control (LLC) sublayer, where encapsulation, error checking, and frame synchronization begin. This is where the Ethernet standards, the WiFi standards, and the old Point-to-Point Protocol (PPP) standards appear.

Layer 3 - Network Layer

The network layer provides the functional and procedural means of transferring packets from one node to another on different networks, effectively routing packets from one network to another with intelligence. When we talk about routing protocols, we hear terms like EIGRP (Cisco proprietary) and OSPF. Older protocols include RIP. IPsec also operates at layer 3.
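
If you want to watch layer 3 routing in action from an ordinary shell, traceroute prints each router hop a packet crosses on its way to a destination. A quick, hedged example, assuming a Linux box with traceroute installed (the destination is just a placeholder):

# Show each layer 3 hop between this host and example.com
traceroute example.com
# The Windows equivalent is: tracert example.com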

Layer 4 - Transport Layer

The transport layer provides the functional and procedural means of transferring variable-length data sequences from a source to a destination host while maintaining quality of service. Here we start talking about the size of a data packet on the network (the frame size). The standard Ethernet MTU is 1500 bytes; after the IP and TCP headers take their roughly 40 bytes, the rest is payload. Larger packets may be transmitted if all the routers and switches in the network agree to it, but if you are connecting to the Internet, 1500 bytes is all you get. As a result, large data transfers segment their data into many(!) TCP packets.

This layer is also responsible for flow control but not for reliability; that is the responsibility of the protocol. TCP, as a protocol, is chatty: it acknowledges each packet sent and received, thus ensuring reliability. UDP is what we call an unreliable protocol; if something is transferred over UDP, there is no acknowledgement mechanism to ensure it arrived successfully.
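
One way to see that 1500-byte limit for yourself, assuming a Linux host with the iputils version of ping (the flags differ on macOS and BSD), is to send packets with fragmentation prohibited. 1472 bytes of payload plus 8 bytes of ICMP header plus 20 bytes of IP header is exactly 1500 bytes, so one byte more should fail:

# Probe the path MTU: 1472 + 8 (ICMP header) + 20 (IP header) = 1500 bytes
ping -c 3 -M do -s 1472 example.com   # should succeed on a standard 1500-byte path
ping -c 3 -M do -s 1473 example.com   # should fail (for example, "message too long")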

Layer 5 - Session Layer

This layer is responsible for dialog control. It establishes, manages, and then terminates connections in either full-duplex, half-duplex, or simplex mode. This is also where session checkpointing occurs, such as with remote procedure calls (RPCs). The session layer is largely theoretical; in practice, its functions are folded into other parts of the TCP/IP stack.

Layer 6 - Presentation Layer

The presentation layer is the context switch between application layer items. This is where mapping occurs (if needed), along with encapsulation (TLS) and other processes that move data up or down the communication stack.

Layer 7 - Application Layer

This is where the user generally interacts with the application: where graphical user interfaces are drawn and displayed and where input and output occur.

While this is all grossly simplified, it highlights several steps where things can go wrong. It also highlights why you need to know how your application will intersect with the various subsystems below it and what that impact might mean in terms of resource allocation, consumption, and application performance over time.

The Server Farm

A sample server rack diagram

The cloud is nothing more than many servers working together, with or without some form of storage. Technically, today, the cloud is more about the application you are interacting with and less about where the environment running the application is located or how it is constructed. It is crucial to understand, at least in theory, what the application is doing, where it lives, and why it all works the way it does [1].

Companies that provide cloud services (like Amazon Web Services and Microsoft Azure) maintain large data centers, essentially warehouses filled with equipment. A lot of equipment. Most of this equipment lives in racks nineteen inches across and 42U (a technical term) in height [2].

A typical rack includes:

  • The rack (and screws, do not forget the screws)
  • A power source (most are DC powered in data centers)
  • A rack-based router (for all those servers) [3]
  • The servers (1U - 4U boxes)

The servers take up most of the space in the rack, anywhere from 38 servers (at 1U) to 9 servers (at 4U). Larger servers tend to be reserved for more specialized purposes, like running database platforms within the environment (think Amazon Redshift or Azure SQL).

In other parts of the data center, you will find racks dedicated to storage, routing, and other operational requirements (like the Amazon Marketplace) or to CI/CD for host management. Most data centers do not run at 100%. At any time, a host (or a rack of them) will be down for maintenance or replacement, hard disks need tending, and wires and cables sometimes need to be repositioned to add capacity or change it.

But there is still one more layer of abstraction that needs to be discussed. It is unlikely you will run your application on a server directly, on what we call bare metal. In most cases (all cases for AWS and other commercial cloud vendors), you will run in a virtualized (guest) space on the host OS (the OS on the bare metal). Such hypervisors include VMware's vSphere, Microsoft's Hyper-V, and Linux's KVM. Through magical trickery [4], the host mirrors the bare metal for the guest OSs, and depending on the resources of the bare metal, you can run multiple guests per server. With shared storage, you can interconnect storage across various hosts. This becomes the power of the cloud.
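
As an aside, if you are on a Linux box and curious whether the CPU underneath exposes the hardware virtualization extensions that make this trickery possible, a rough check (assuming an x86 host with /proc available) looks like this:

# Count CPU flags advertising hardware virtualization (vmx = Intel VT-x, svm = AMD-V);
# a result of 0 means no hardware support is exposed.
grep -E -c '(vmx|svm)' /proc/cpuinfo
# Or, with util-linux installed, a friendlier summary:
lscpu | grep -i virtualization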

When is the cloud a platform?

Using the above figure as a reference, up to this point, we have discussed the traditional server farm model, whether that model has all components in a single rack or a series of racks. In this case, you are responsible for managing all aspects of the environment, including the power and cooling you need to keep it functioning, along with the personnel to run it. This is not the cloud.

When we discuss Infrastructure as a Service (IaaS, the second column), it becomes a cloud environment. It is at this point that the management of the environment shifts and splits. The company providing the infrastructure (such as Amazon) is responsible for all aspects of that environment up to the (guest) operating system level. From that point onward, the customer is responsible. In most cases, this means they are responsible for the security and patching of the guest and all applications running on and interacting with the guest. This may be a single host (guest) or a whole farm of interconnected hosts with containers, databases, and storage. In essence, you are purchasing traditional (albeit virtual) infrastructure from a provider. The line between cloud and traditional networks is blurred here, and it is easy to confuse the two. Just remember, if you are using Infrastructure as a Service, there are still many subsystems you are not managing, nor are you responsible for keeping them current.

It becomes a platform when you move into Platform as a Service (PaaS). The application is where you interact, and your only responsibility is for the application and its associated architecture. Any patches or updates to the OS, the databases, etc., are the cloud provider's responsibility. They may require you to adapt your application based on changes in APIs or associated calls to the middleware, but those generally are small changes advertised well in advance of any lower-level updates. You can also reduce the skill set you need to have on staff; typically, you are only developing your application. Most software companies that develop their own applications sit at this level.

Finally, there is Software as a Service (SaaS). Offerings like Salesforce, Workday, and Lucidchart, where you rent application space, are SaaS. While you might configure the application or write additional customizations to address gaps, you are not responsible for the underlying platform or application. Updates are delivered to you, with warnings when there are systemic changes you need to account for or prepare for, but at the end of the day you are renting the application and using your own data. As the consumer, you do not have to do anything.

Moving the data

One of the reasons for the success of the Internet is the ability to move data from here to there through standard, well-understood protocols. Before the 1990s, most of this was via dedicated communication lines between universities and certain federal agencies. These systems were almost always Unix-based and used TCP/IP as the communications protocol. Tim Berners-Lee had not yet developed the HTTP protocol for sharing documents, there was no Google for finding sites, and there certainly was not enough bandwidth for video streaming, much less the codecs. Any data transferred went via FTP or email, and security was not high on the list of essential items. Most people had never heard of the Internet. That all changed, and rapidly, throughout the 1990s and early 2000s.

But there was still the limitation of the physical cable, whether that was in the data center, the business office, or the connection between the house computer and the Internet Service Provider (ISP). Firms like AOL, CompuServe, and Prodigy were the first access points (the on-ramps) to the information superhighway. Often they came with additional protections and filters that kept people away from the bad parts of the Internet. In the middle part of the 1980s, the FCC released the 2.4 GHz band for general use (the same frequency range used by microwave ovens, and close to the bands used by some of today's 5G deployments). However, it was not until the late 1990s that the first reliable WiFi interfaces were released for public use, adopted primarily by corporations rather than home users.

At the same time, we began to see high-speed, high-capacity, high-bit-rate lines deployed between central offices and the ISPs. Dedicated (T1 and T3) and fractional (Frame Relay, ATM) connections tied business offices to the main office and flowed through increasingly large, complicated telco clouds. This further increased the concentration of data within the telco networks.

All of this would be required for the next quantum leap forward: the movement (and capability) to use mobile devices connected to the cellular network. It would take another discussion to cover the technology in the cellular network, but go back to our basics. The mobile device is connected to the cell tower, where it gets an IP address (IPv6, in case you are wondering, and that is yet another discussion). Data is encapsulated on the phone and sent up the wire (the cell connection) to the tower, where it is received, translated into bits, and sent on to the network. The cell tower acts as part of the physical layer. Modern mobile equipment is no different from your laptop or desktop in terms of networking software. Because of this, we are seeing the convergence of mobile and desktop operating systems at a speed that eclipses their initial development.

Conclusion

Many areas of this process have been glossed over for brevity's sake. For example, the whole discussion of a TCP packet takes up three volumes, starting with TCP/IP Illustrated, Volume 1: The Protocols. Routing and switching is a skill set of its own, and storage management is a full-time job. Then there are the aspects of database management, cellular RF engineering, and the headaches of making good fibre connections (hint: polishing glass is tricky to master).

But with this overview, you should begin to understand the levels of responsibility and, more importantly, the amount of complexity that has been engineered out of the system, particularly if you work at the platform level. Clouds are not free. There is a great deal of work and cost in keeping them operational, even if you are not the one doing it.

Footnotes:

  1. In many cases, the location where your application is running or the data is residing is not an issue, but remember that the cloud, especially for large cloud providers, could have data stored in locations that may have legal ramifications for that data.
  2. We call them 19" racks because of the internal space between the screw holes that hold the equipment in place. The outer dimensions of each rack are generally two feet wide by as much as two feet deep, but most servers stick out another foot or so beyond that. Rack height is measured in rack units (a rack unit is 1-3/4 inches), which defines how much equipment the rack can hold.
  3. These are often called Layer 3 switches, which combine the features of a Layer 2 switch functionality and the routing capabilities of a traditional router.
  4. We could spend another hour discussing the various coding games played by the kernel and the requirements in the CPU for virtualization to work.

Just Because You Use The Tools, It Does Not Mean You Do DevOps

I have long said that many of the tools and practices used in DevOps are suitable for legacy software development, and their use should be encouraged. But beware: just because you apply some DevOps and Agile practices does not mean you are doing DevOps. For example, just because you have automated builds in a pipeline does not mean you are doing DevOps. DevOps (and Site Reliability Engineering) require particular aspects of Agile to succeed. Have a stand-up? Do you actually stand up? Is the stand-up less than fifteen minutes? Do you only cover what you did yesterday, what you plan to do today, and your blockers?

I have witnessed a few things over my career at companies that do DevOps, but no, not really. I will update this as more come to light. I am sure more will come to light.

With apologies to Jeff Foxworthy.


If you complain the Agile ceremonies are taking too much time out of your day for coding, you’re not doing DevOps.

If your stories don’t fit into your sprint, you’re not doing DevOps.

If getting a team to look at an issue requires several manual emails to more people than have appeared on Survivor, you’re not doing DevOps.

If you are output-driven rather than outcome-driven, you’re not doing DevOps.

If you have a laundry list of features that must be finished before code freeze, you’re not doing DevOps.

If your features sit unused for months before they are implemented into production, you’re not doing DevOps.

If your shortest scrum of scrum meetings gave you a minute back in your day, and it was only 59 minutes long, you’re not doing DevOps.

If your API gateways are tracked manually, updated randomly, and require multiple teams to update, you’re not doing DevOps.

If your API gateways are stored in version controls, but you have to update your code manually, you’re not doing DevOps.

If your automated deployment process requires an email to be manually sent to more people than a Major League Baseball team to start the smoke test, you’re not doing DevOps.

If you track build release numbers manually on a spreadsheet or a wiki page for each environment, you’re not doing DevOps.

If “they changed the password again, and we have to get it from…” is a common refrain in your RCA meetings, you’re not doing DevOps.

If your request to refresh test data is met with a “we don’t have the time” response, you’re not doing DevOps.

If you run out of disk space because the application or system is not monitoring and alerting, you’re not doing DevOps.

If running out of disk space causes your application to fail, you’re not doing DevOps.

If you are not using elastic environments, you’re not doing DevOps.

If the term self-healing is met with blank stares, you’re not doing DevOps.

If the phrase “yeah, we noticed that bug in development too” is common during RCAs, you’re not doing DevOps.

If the phrase “worked in the lower environments” is common during RCAs, you’re not doing DevOps.

If another team is writing the unit tests and behavioral tests (TDD/BDD), you’re not doing DevOps.

If you have to set up a reminder to replace or renew your certificates, you’re not doing DevOps.

If you have to call two dozen people and wait an hour for a pull request to production, you’re not doing DevOps.

If your security system worked in production, and development, but no longer works after your release, and you made no changes to the code, you’re not doing DevOps.

If a minor upgrade to production requires a meeting of more people than the cast and crew of Game of Thrones, you’re not doing DevOps.

If the first step in updating your applications is “turn off the web server,” you’re not doing DevOps.

If a minor update to production requires you to shut down for a day and notify the business that their systems will be unavailable, you’re not doing DevOps.

If you are more worried about data backup procedures than being down for a day, you’re not doing DevOps.

If you are more concerned with your roll-back plan than you are about being down for a day for a minor upgrade, you’re not doing DevOps.

If you need to take a day to do a minor upgrade, you’re not doing DevOps.

AWS Template Creation by Script

During an AWS architecture class, we had to create and launch an AWS stack. Within the stack it was Infrastructure as Code, but the actual launch of the stack was done at the console. I knew that, once upon a time, I had done stack creation as IaC as well. I dug back through some of my old examples and found the code (below) that I used to create the stack, along with some of the variables.

The Code

Line numbers are for reference. Note that this is a single bash shell block (hence the “\” at the end of the continued lines starting at line 2).

1.  cfn_stack_name="${JOB_NAME}-${pipeline_instance_id}"
2.  cfn_stack_id=$(aws cloudformation create-stack \
3.     --disable-rollback \
4.     --region $region \
5.     --stack-name "$cfn_stack_name" \
6.     --template-body "file://${cfn_template_path}" \
7.     --parameters ParameterKey=amiID,ParameterValue=$baseami \
8.         ParameterKey=vpcID,ParameterValue=$vpc \
9.         ParameterKey=subnetID,ParameterValue=$subnet \
10.        ParameterKey=keypairName,ParameterValue=$jenkins_key_name \
11.    --tags Key=BuiltBy,Value="Jenkins_$(hostname)" \
12.        Key=AWS_OP_ENV,Value="$aws_op_env" \
13.        Key=Server,Value="$server_function" \
14.        Key=System,Value="$system" \
15.    --query 'StackId' --output text)
16. max_waitime=600
17. wait_interval=5
.
.
.
18. # wait until the stack is created
19. echo "Waiting for CFN stack to be created..."
20. time monitor_stack --region "$region" --stack "$cfn_stack_name"
21. cfn_instance_id=$(aws cloudformation describe-stacks --region $region --stack-name="$cfn_stack_name" --query 'Stacks[0].Outputs[0].OutputValue' --output text)
22. echo "CFN stack created!"

The other thing to note is you need to have the AWS CLI installed in your build environment for this to work. In most cases, you will be building this inside AWS, so the CLI will be available to you.
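
A quick, hedged way to confirm the CLI is present and has working credentials on the build agent (both are standard AWS CLI calls):

# Confirm the CLI is installed and which identity the build agent is using
aws --version
aws sts get-caller-identity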

The Explanation

In the code starting on line 1:

cfn_stack_name="${JOB_NAME}-${pipeline_instance_id}"

The JOB_NAME and pipeline_instance_id are generated by the Jenkins job. You can name it however you want; that was just what we used. We originally started with just date/time stamps.

Line 2 begins the actual stack creation:

cfn_stack_id=$(aws cloudformation create-stack

The cfn_stack_id is captured at the end of the code block by --query 'StackId' --output text (line 15). The syntax may be old; check the documentation for the correct query key. The rest of the data is necessary to define the stack.

Most of the variables are defined higher up in the script, most of them based on calls to a DynamoDB instance where we stored various bits of data that may or may not have changed throughout the build process, or as defined by the customer. We also saved the stack name in that same DB so we could tear the stack down later.
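
Tear-down is not shown in the snippet above, but the general shape of it, assuming the stack name has been pulled back out of that DynamoDB table into $cfn_stack_name, would be something like this:

# Delete the stack and block until CloudFormation reports it is gone
aws cloudformation delete-stack --region "$region" --stack-name "$cfn_stack_name"
aws cloudformation wait stack-delete-complete --region "$region" --stack-name "$cfn_stack_name"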

Finally, we wrapped the stack creation with a timer. This may need to be adjusted based on the speed of the environment or the number of parameters you are pushing into the stack. You want the script to error out if things are taking too long; otherwise, it will hang and the build server will appear to be stuck. We also had some additional verbiage at the bottom of the script that pushed text to the log file/console output so you could see it succeed, as shown in lines 18 - 22.
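
The monitor_stack function itself is not shown above. A rough sketch of what such a polling loop might look like, using the max_waitime and wait_interval values from lines 16 and 17, is below; this is my reconstruction, not the original function.

# Hypothetical reconstruction of monitor_stack: poll the stack status until it
# completes or max_waitime seconds have elapsed, then fail the build.
monitor_stack() {
    local region="$2" stack="$4" elapsed=0 status
    while [ "$elapsed" -lt "$max_waitime" ]; do
        status=$(aws cloudformation describe-stacks --region "$region" \
            --stack-name "$stack" --query 'Stacks[0].StackStatus' --output text)
        case "$status" in
            CREATE_COMPLETE) return 0 ;;
            CREATE_FAILED|ROLLBACK_*) echo "Stack failed: $status"; return 1 ;;
        esac
        sleep "$wait_interval"
        elapsed=$((elapsed + wait_interval))
    done
    echo "Timed out waiting for stack $stack"
    return 1
}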

One other thing to note is that the stack also launches an AMI (again, pulled from the reference data). Once the stack and its associated AMI are up, the next part of the pipeline starts. This could populate the AMI, test it, turn it into a Jenkins build server, or whatever was necessary. The key here is that it is all code.

Installing MediaWiki on Ubuntu 18

A buddy sent a request: he was installing MediaWiki on Ubuntu and having issues, so he asked me to take a look. I reviewed guides on Linux Support and HowtoForge on installing MediaWiki and found them to be a tad dated. So I went through the installation myself, and here is how I installed it.

All steps are done as a sudoer or as the root user. I did this on AWS with an Ubuntu 18.04 minimal base image. I assume you know how to log into a console. I used Apache; you can use Nginx, but the server directions are different and I did not have a chance to try them out.

Update the OS

sudo apt-key adv --recv-keys --keyserver hkp://keyserver.ubuntu.com:80 0xF1656F24C74CD1D8
sudo add-apt-repository "deb [arch=amd64,arm64,ppc64el] http://mariadb.mirror.liquidtelecom.com/repo/10.4/ubuntu $(lsb_release -cs) main"
sudo apt-get update
sudo apt-get upgrade

Install basic packages

sudo apt-get install -y apache2 software-properties-common
sudo apt -y install mariadb-server mariadb-client
sudo apt install php libapache2-mod-php
sudo apt-get install imagemagick php7.2-fpm php7.2-intl php7.2-xml php7.2-curl php7.2-gd php7.2-mbstring php7.2-mysql php-apcu php7.2-zip

Once PHP is installed you will get a notice similar to:

NOTICE: Not enabling PHP 7.2 FPM by default.
NOTICE: To enable PHP 7.2 FPM in Apache2 do:
NOTICE: a2enmod proxy_fcgi setenvif
NOTICE: a2enconf php7.2-fpm

I enabled it after the fact and it worked. You can do it now or later as you desire.
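
For reference, enabling it amounts to running the commands from the notice above and restarting Apache (the module and conf names come straight from that notice):

sudo a2enmod proxy_fcgi setenvif
sudo a2enconf php7.2-fpm
sudo systemctl restart apache2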

Modify PHP settings (Optional)

If you are putting your server into production, use the following settings initially. If you are just looking around, the default php.ini settings are fine except for the timezone settings. You should set the timezone appropriately.

For production, edit /etc/php/7.2/apache2/php.ini and make the following changes:

memory_limit = 256M
upload_max_filesize = 100M
max_execution_time = 360
date.timezone = America/New_York

Run the secure installation for MariaDB (Optional)

If you are running a production server, you should do a secure installation.

sudo mysql_secure_installation

Create the MediaWiki table space

Login to MariaDB

mariadb -u root -p

And create the MediaWiki user and database as follows:

CREATE DATABASE mediadb;
CREATE USER 'media'@'localhost' IDENTIFIED BY 'password';
GRANT ALL ON mediadb.* TO 'media'@'localhost' IDENTIFIED BY 'password' WITH GRANT OPTION;
FLUSH PRIVILEGES;
EXIT;

Where password is a secure password. This will be put into the MediaWiki configuration later, so do not forget it. The database mediadb and user media can be anything you want them to be.

Edit Apache’s site configuration

You will need to add MediaWiki to the site configuration. Create a new file called mediawiki.conf:

sudo vi /etc/apache2/sites-available/mediawiki.conf

And add the following:

<VirtualHost *:80>
    ServerAdmin admin@example.com
    DocumentRoot /var/www/html/mediawiki/
    ServerName example.com
    <Directory /var/www/html/mediawiki/>
        Options +FollowSymLinks
        AllowOverride All
    </Directory>
    ErrorLog /var/log/apache2/media-error_log
    CustomLog /var/log/apache2/media-access_log common
</VirtualHost>

Where the ServerAdmin value should be a real email address and the ServerName should be the domain name of the server. Also, ensure that the DocumentRoot is correct. If you only want to use MediaWiki, you can set the DocumentRoot to /var/www/html, but you will have to modify a step below as well.

Restart everything

Do not restart the server yet! Instead, restart the key services.

sudo a2ensite mediawiki.conf
sudo a2enmod rewrite
sudo systemctl restart apache2
sudo systemctl enable apache2
sudo systemctl start mariadb
sudo systemctl enable mariadb

Download the current MediaWiki source

From the MediaWiki site, make sure you have the correct version. As of this writing, it is: mediawiki-1.33.1

Change to a temporary directory, then download, untar, and move the files to the web server:

wget https://releases.wikimedia.org/mediawiki/1.33/mediawiki-1.33.1.tar.gz
tar zxvf mediawiki-1.33.1.tar.gz 
sudo mkdir -p /var/www/html/mediawiki
sudo mv mediawiki*/* /var/www/html/mediawiki

If you modified the DocumentRoot in the Apache configuration to /var/www/html, you will need to modify the command above. You will only need to move the contents of the base mediawiki folder:

sudo mv mediawiki*/* /var/www/html

Point your browser at the web site

Depending on your configuration, you can either use localhost or the hostname of your server. If you used the mediawiki folder option, you have to put the folder on the end.

http://<hostname>/mediawiki
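
If the browser shows nothing, a quick sanity check from the server itself (assuming curl is installed; drop the folder from the URL if you changed the DocumentRoot) is:

curl -I http://localhost/mediawiki/

You should get an HTTP response back from Apache rather than a connection error.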

Good luck!

Web Links

MAX and Human Errors

What Really Brought Down the Boeing 737 Max? - The New York Times

In the drama of the 737 Max, it was the decisions made by four of those pilots, more than the failure of a single obscure component, that led to 346 deaths and the worldwide grounding of the entire fleet.

I am not a pilot, and I have never been at the controls of an airplane. This very long article goes into a number of issues surrounding a complicated piece of technology. Take a read. It does not take any responsibility off of Boeing, but it certainly does not make them out to be the only villain in the story.

That is a big hole

Scientists have discovered deepest point on land | WTOP

The trough is about 3.5 km (about 2 miles) below sea level but there is no ocean water there. Instead, it is filled with ice flowing from the interior of the ice sheet towards the coast. The trough measures about 100 km in length and is 20 km wide, according to the study.

Think about how long 100 km is. The District of Columbia is 16 km on a side. According to Wikipedia, 100 km is 9/10 as long as the English Channel and not quite as wide as the narrowest point. And it is on land! Think about that with your morning coffee.

Review of the YSmart TIPEN

YSmart introduced a new pen on Kickstarter earlier in 2019 (and it is now available to purchase on Indiegogo). Since I have not met a pen that I did not like, especially one made out of virtually indestructible, go-anywhere metal, I bought into the program. My pens arrived this week, and here is my review.

First, this pen is tiny. I mean really, really small. For comparison, in the image above, we have the YSmart TIPEN resting against the ruler; end to end, it is barely 2 inches long. For scale, we also have the Fisher Bullet Space Pen, a basic black marker, and a standard, freshly sharpened number 2 pencil. However, uncapped, the pen is even smaller.

Unlike the Bullet pen, whose cap posts on the back and gives you an extra inch or so, the TIPEN cap will not fit on the back, leaving the pen at 2 inches. For those of us with long fingers, this becomes a bit of a problem when writing, especially if you are used to resting the pen against your finger.

This also impacts the quality of your writing, especially over time. You will not be writing long epistles with the TIPEN, but it is useful to have around for quick notes and shopping lists.

The ink is similar in feel to the Fisher refill, which is why I chose it for comparison. It is not a ballpoint ink per se, nor is it a gel ink (my preferred ink in non-fountain pens). It writes smoothly and with no skips once started. YSmart additionally claims that the nib is unbreakable and suitable for opening packages, paint cans, and other non-writing functions.

For an EDC pen, it would not be my first choice; despite its slightly larger size, I would select the Bullet pen, or its brother, the Trekker pen, with its key-chain attachment. But for an emergency pen, the TIPEN is a good choice. You can put it on your key ring and forget about it until you need it.

I Am Not Filling Out Your Survey

And if I do, you won’t like the outcome.

Grade inflation, or rather star inflation, is rampant in online shopping. I blame Amazon. But it has gotten so carried away that everyone from the checkout clerk at the grocery store to the guy who sends me simple screws is asking for my review of their performance or their product. Let’s face it: I do not have the time.

If you keep insisting, I am going to grade you precisely the same way I graded my employees. I used to work for a company with a hideous annual review process. If you did your job, you got a three out of five. If you aspired to a higher grade, there were strict criteria. To get a four, you had to be recognized as an expert or leader by other people in the company. We had over seven hundred people, so being recognized in this way rarely happened; we used to say you had god on speed-dial. To get a five, you had to be recognized as a leader in your industry; god had you on speed-dial.

I am going to take the same tack with surveys. If your product does the job, you get a three. To get more stars beyond that, you are going to have to rock my world. Second, if you sell me a product, do not ask me for my initial opinion; I probably have not had time to use it effectively. Ask me again in another month. Or another six months. That will allow me to evaluate your product correctly. Frankly, unless it changes my world, you will still only get three stars. The number of things that have risen to that level in my life is so small it could be counted on one hand.

Just stop asking.

When Privacy and Reality Interconnect

His privacy being paramount, Kelly grudgingly chooses to head into Columbia every so often, rather than cede his data to Google or turn over his purchase history to another online retailer. “I’m just not sure why Google needs to know what breakfast cereal I eat,” the 51-year-old said. Washington Post

There are a couple of things to notice here.

First: Google is not the only company out there snarfing up your data. Zuckerbergland apps, Verizon (you know: AOL, Yahoo, Tumblr), and Microsoft (LinkedIn, Bing, and all those Microsoft apps like Word, etc.) are only some of them.

Second: Most websites have some form of tracking software on them, and they can be related to any of the three or more listed above.

Third: Despite what the EU would have you believe, GDPR is not your salvation. Many websites outside the EU say, in the small print, that the site is not intended for consumption by people in the EU, which means the GDPR has zero impact.

And realistically, if you do not want to be tracked, there is only one way to avoid it: stay off the Internet. And that includes no smart devices (there is tracking software on them too), no credit cards (who do you think came up with the idea of tracking purchases?), and no cheques. In fact, depending on where you live, you are being watched by CCTV cameras, where the video is uploaded and searched for malcontents using AI and facial recognition software. If you travel, you are tracked whether by plane, train, or automobile (toll plazas, rest stops…). Let’s face it: unless you are a hermit, you have no privacy.

And ironically, we all know that Mr. Kelly, who is 51 years old, likes to eat Bob’s Red Mill muesli cereal. So his privacy is now shot too, because he talked to a reporter, and the story ended up…on the Internet.