Once upon a time, back when the Internet was young and most people did not yet have an email address, much less separate addresses for their various roles, some legal eagle decided to put a footer on their emails declaring the contents privileged information and instructing anyone who was not the intended recipient to destroy the message, etc. You may have seen them, and they are still kicking around, depending on whom you do business with and what their job is. It is generally agreed they are not legally binding, much less enforceable, but if it makes you feel better when you write highly confidential information on a postcard, knock yourself out.
Today, a very colorful spam message arrived, telling me I had been awarded a scholarship to a real US university that I never applied to and that the message has no actual affiliation with. The following legalese was attached:
Disclaimer: The information in this email and any attachments contains proprietary, confidential and legally privileged information and therefore is intended only for the person(s)/recipient(s) named in the message header. Further all such proprietary, confidential and privileged information is owned by Brain4ce Education Solutions Private Limited operating through its brand Edureka.
If you are not the addressee/recipient of this message, you shall not copy, forward, print, distribute, disclose or use any part of this message. If you have received this message inadvertently, please immediately notify the sender about the same and delete this message and all copies from your system.
Yes, the spammer is fronting for a US university but is really claiming to be Edureka, which, based on Internet reviews, is either a full-on scam or a valuable use of money. Since the document itself is full of bogus links and contains an Indian call-back number (no US institution would ever do that), I have to say the whole thing is pretty much a giant scam, and I will not be sending them my $1250 USD, even though I was awarded a scholarship. I really need a new NAS drive anyway.
Admissions Office: Academic Year 2024
To, Selected Learner / Candidate
This is to inform you that, you are selected for the special scholarship for Post Graduation Program in DevOps Engineering given by [Respectable US] University, USA and the selection is done based on your profile and various other factors which our joint committee decides for this program by [Respectable US] University at Edureka.
You are required to enroll on or before the 23rd of April 2024 to utilize the special scholarship.
Now, you have the opportunity to study Cloud and advanced DevOps from [Respectable US] University. And upgrade your knowledge through a challenging curriculum.
I believe in the soul, the cock, the pussy, the small of a woman's back, the hanging curve ball, high fiber, good scotch, that the novels of Susan Sontag are self-indulgent, overrated crap. I believe Lee Harvey Oswald acted alone. I believe there ought to be a constitutional amendment outlawing Astroturf and the designated hitter. I believe in the sweet spot, soft-core pornography, opening your presents Christmas morning rather than Christmas Eve and I believe in long, slow, deep, soft, wet kisses that last three days. -- Crash Davis, Bull Durham
I once explained to a program manager the principles of software development with a plate of spaghetti. I am not sure he was convinced, but it got me thinking that a more expansive explanation might clarify things. And one of the nice things about this process is we will have a tasty dinner when we are done.
Let us dive in.
First, we must define some terms (and try not to stretch the analogy too much).
Terms
Ingredients: When we talk about ingredients, we mean the raw materials that go into our recipe2. There are several types of these when we talk about software. First is the actual source code. Secondly, we have third-party libraries. Some shops split these into two pieces: the unaltered and the altered libraries3. Finally, there are the pre-built components delivered by other teams. In the case of our recipe for Fettuccine, all our ingredients are pre-built and produced4 in different places, so we have to trust their quality control.
Directions: The directions are the instructions on how we put the ingredients together to get what we want. In software, these are our Makefiles5. Just because we have flour, eggs, and oil, it does not guarantee that we will get pasta out of it6 unless we follow the directions. The directions also include additional information for plating and delivery, much like software packaging and delivery. Sometimes this information is included in the base repositories. Sometimes, it is carried separately.
Quantities: You will note that our ingredient list includes specific quantities of this or that. If you need to make more, you increase the amounts. If you want less, you decrease them. In the software world, this is analogous to compiler flags and switches that control various desired outcomes for memory usage, pointers to gateways, and other preconfigured endpoints.
Tools and Equipment: Any good chef has their preferred tools. These are their knives, pots, pans, stoves, and fuel sources. The same is true for software. Build servers, test fixtures, and artifact storage are only some of the tools that will come in handy when you build (cook) your software7.
Environment: At the end of the day, you need to plate (deliver) your meal. Are you cooking for the family? Cooking to test the recipe? Cooking for a charity meal? These are different environments, even if the end product is the same. The same is true for software. Regular releases to see if it works would be development, validating that it tastes good would be quality assurance, and serving eight at a charity meal would be production, for example.
The Ingredients
After we initialize our repository (git init), we check in our ingredients (or get out some dishes) with a git commit (or some measuring devices). If we are going to make Fettuccine Alfredo, we need the following ingredients:
Once we have committed our code, the automated tests (TDD/BDD)11, and the other build instructions to the repository, we are ready to start cooking12.
The Directions
Now we turn to the build server and push our code into it, and if our instructions are good, we get something useful out the other side. In this case, we will take our pasta ingredients and build some fettuccine noodles.
Pasta
To do this, we take our ingredients and follow the directions. You will want to wash your hands before we begin. Think of it like linting your software. You want to be hygienic.
Place the flour in a mound on a wooden or plastic surface (marble will chill the dough and reduce elasticity). Make a well in the flour and break the egg into it. Add a drizzle of olive oil and a pinch of salt to the well.
Start beating the eggs with a fork, pushing up the flour around the edges with your other hand to make sure no egg runs out, and pulling the flour from the sides of the mound into the eggs.
When you have pulled in enough flour to form a ball too stiff to beat with your fork, start kneading the dough with the palm of your hand, incorporating as much of the flour as you can. You will have a big ball of dough and a bunch of crumbles. Push them aside and scrape the surface clean with a metal spatula.
Sprinkle the surface with more flour, place the dough on it, and knead by pushing it down and away from you, stretching it out. Fold the dough in half and continue pushing it down and away. Keep repeating this action until the dough no longer feels sticky and has a smooth surface. It should take about 15 minutes.
Cut the dough into four pieces. Wrap in plastic the pieces you are not going to work on immediately. Makes about 1 lb.
But this is only the first part of the build, and there are some tests built into the process of making the dough: validation tasks like ball too stiff, crumbles on the side, and smooth surface. These checks help you evaluate the quality of the dough early, just as unit tests help us assess the quality of the code early. By running unit tests, you can quickly ascertain the quality of the code and fix any issues before they become too hard to resolve. We call this shift left.
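In software terms, a shift-left check can be as small as a unit test that runs on every commit. Here is a minimal sketch using Python's built-in unittest module, with a hypothetical knead() helper standing in for one of the dough checks above:

import unittest

def knead(minutes: int) -> str:
    # Hypothetical build step: returns the state of the dough after kneading.
    return "smooth" if minutes >= 15 else "sticky"

class DoughTests(unittest.TestCase):
    def test_smooth_after_fifteen_minutes(self):
        # Shift-left check: fail fast if the build step does not produce
        # the expected result, long before anything reaches the plate.
        self.assertEqual(knead(15), "smooth")

    def test_still_sticky_when_underworked(self):
        self.assertEqual(knead(5), "sticky")

if __name__ == "__main__":
    unittest.main()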
Our wrapped dough is now an artifact, and we can check it into our repository (put it on a shelf or in the refrigerator, depending on how long you expect to process the first piece).
But a blob of dough is not a finished product, so we need to process it a bit more.
Make the Fettuccine
Making Fettuccine by hand is a time-consuming process. You can do it manually or use a machine (automated). Let’s roll it out.
Personally, I prefer to use a machine. My build server is a KitchenAid with a pasta roller attachment. You are going to need some additional items:
Parchment paper
Extra flour
Lots of extra space to lay out the pasta to dry
Set your machine to its widest setting and run one piece of dough through it two or three times. Flour the dough lightly if it starts to stick. Fold the dough into thirds, reduce the width by one setting, and start again. You will continue reducing the width each time. You may need to cut your dough if it becomes unwieldy (it will). Place it between parchment when you are not actively working with it. I find that on my machine, for fettuccine noodles, I have to squish it down to setting 5. You may have different results.
Again, like software, as you manipulate your package, you will have to experiment. What is the expected outcome? How do you adjust the flags and configurations to achieve that desired outcome? Until you have done it once or a dozen times, it requires experimentation (development).
You can create your pasta by hand or use a machine. Again, I find the machine to be more helpful. Once you have the dough at the desired thickness, cut it into strips, set them on the parchment, and set them aside to dry. You can also trim them to the same size or square off the bottoms, or, if your timing is right, you can toss them in a pot of boiling water and cook them.
I usually make my pasta the day before, so I let it dry before cooking. Again, we have an artifact for our repository. In this case, a finished set of pasta noodles (binary) is ready for the next step in the process.
The Alfredo Sauce
Fettuccine Alfredo is a two-step process: the pasta (repo pasta) and the sauce (repo sauce). Or, if you are using packaged pasta, think of it as archive pasta. You can use packaged sauce too, but we will do it from scratch. I think it tastes better.
First, we must cook the pasta (build cooked pasta). Bring a large pot of water to a boil.
We must also get the sauce going (build sauce). Bring the cream and butter to a boil in a large saucepan over high heat. Reduce the heat to low and simmer for about one minute. Add six tablespoons of the grated Parmesan and whisk over low heat until smooth, about one minute longer. Remove from the heat and season to taste with salt, pepper, and a generous pinch of nutmeg. (Be judicious with the salt; Parmesan is salty enough by itself when it is fresh.)
Generously salt the boiling water, add the pasta, and cook until al dente, 1-3 minutes. Drain the pasta well.
Put the pasta in a warm, large, shallow bowl. Pour on the sauce and sprinkle with more cheese. Toss well and serve immediately.
Again, we have several checks along the way (test steps) that we can validate.
Delivery (Plating)
The instruction serve immediately does not tell us anything about how we are going to serve our meal.
If this were a development environment, we might just scoop a forkful out of the bowl and see how it came out. For a quality check, we might serve it to our family by scooping it onto the day-to-day plates, putting it on the table, and pouring a lovely Chardonnay to go with it. But for production, we might get out the good china, serve each guest their own plate, and ask them if they want more cheese on top.
It is important to note that regardless of how we plate (deliver) our meal, there are no changes to the basic ingredients or the build process. Whether it is delivered in small batches to validate, in larger batches for user acceptance, or in quantity for eight, the process is the same.
And that is what is essential about software delivery. Regardless of the environment we deliver to, we must use the same ingredients, tests, build and deploy processes each time.
This ensures that what we deliver is the same each time. Of course, when you cook, some variables impact the outcome. A pinch of salt might be larger or smaller each time. This is not the case in software development, where every measure is the same, and there are no variations between one build and another unless you change the underlying recipe.
Recipes are nothing new to software development. The orchestration tool Chef has long used recipes to describe its installation procedures and other culinary terms to define its tools and processes. ↩
The difference: unaltered third-party libraries are used without making changes, while altered libraries have some change made to them after they have been downloaded. In some cases this could be a patch; in others, a change to the underlying source code. ↩
Pre-built in this case means someone else is providing them to us. I am not a wheat farmer, so I am not growing farina in the back yard. Similarly, I am procuring my eggs and oil from somewhere else, and I am certainly not in Italy, so my cheese is made by yet another someone from their own list of ingredients. ↩
In fact, if we add yeast and water, we will get bread. ↩
Build servers include Jenkins and TeamCity, test fixtures include SonarQube, and JFrog is one of the many artifact storage engines out there. ↩
If you are not going to make your own, use a good quality fettuccine noodle, the fresher the better. ↩
Sometimes called 00 Farina or Semolina flour. You can use any good quality flour you have if you do not have pasta flour. ↩
If you can grate it off the block, so much the better. ↩
Test-driven development (TDD) covers the unit tests and linting that should precede any code creation. Behavior-driven development (BDD) covers the functional (acceptance) tests needed to close out the story and accept it. ↩
We will discuss automated testing methodologies in another post. ↩
1 lb boneless, skinless chicken breast or thigh, cut into 1 inch cubes
2 tbsp peanut or vegetable oil
8 to 10 dried red chilies
3 scallions, white and green parts separated, thinly sliced
2 cloves garlic, minced
1 tsp ginger, minced or freshly grated
1/4 cup unsalted, dry-roasted peanuts
Marinade
1 tbsp soy sauce
2 tsp Chinese rice wine or dry sherry (do not use Mirin - Japanese rice wine)
1-1/2 tsp cornstarch
Sauce
1 tbsp Chinese black vinegar or good quality balsamic vinegar
1 tsp soy sauce
1 tsp hoisin sauce
1 tsp sesame oil
2 tsp cornstarch
1/2 tsp ground Sichuan pepper
The Directions
Marinate the chicken: In a medium bowl, stir together the soy sauce, rice wine, and cornstarch until the cornstarch is dissolved. Add the chicken and stir gently to coat. Let stand at room temperature for 10 minutes.
Prepare the sauce: In another bowl, combine the black vinegar, soy sauce, hoisin sauce, sesame oil, sugar, cornstarch and Sichuan pepper. Stir until the sugar and cornstarch are dissolved and set aside. NOTE: If you like a saucy sauce, you can scale up the ingredients. This is a dryer sauce at this ratio.
You may need to turn on your stove's exhaust fan. Stir-frying chilies on high heat gets a bit smoky!
Heat a wok or large skillet over high heat until a bead of water sizzles and evaporates on contact. Add the peanut oil and swirl to coat the base. Add the chilies and stir-fry for about 30 seconds, or until the chilies have begun to blacken and the oil is slightly fragrant. Add the chicken and stir-fry until no longer pink, about 2 to 3 minutes.
Add the scallion whites, garlic, and ginger and stir-fry for about 30 seconds. Pour in the sauce and mix to coat the other ingredients. Stir in the peanuts and cook for another 1 to 2 minutes. Transfer to a serving plate, sprinkle the scallion greens on top, and serve.
One of the least automated functions in software is release numbering. Often, these numbers are driven by a marketing department trying to impress customers rather than following any set standard.
In the movie Tron: Legacy, the character Alan Bradley, a former programmer, now executive, asks the president of ENCOM:
Given the prices we charge to students and schools, what sort of improvements have been made in Flynn... I mean, um, ENCOM OS-12?
And the president replies:
This year we put a "12" on the box.
Ironically, this is not as far from reality as you might expect. But if you are responsible for a software release, you need a version numbering system that the engineering department can rely on, even if you have to maintain a translation grid somewhere else (hint: you probably will need one).
Version Number Basics
Numbering schemes vary by company but generally follow a template similar to this (a small parsing sketch follows the list):
XX.YY.ZZ.AA
Where:
XX is a major release version number
YY is a minor release version number
ZZ is a patch release version number
AA (AAA) is the daily or latest release version number
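To make the scheme concrete, here is a minimal sketch in Python of parsing and bumping a version in this format. The field names simply mirror the list above; nothing here is tied to any particular build tool.

from dataclasses import dataclass

@dataclass
class Version:
    major: int   # XX - breaks backward compatibility
    minor: int   # YY - new, backward-compatible features
    patch: int   # ZZ - bug fixes only
    build: int   # AA - automated daily/latest build counter

    @classmethod
    def parse(cls, text: str) -> "Version":
        major, minor, patch, build = (int(part) for part in text.split("."))
        return cls(major, minor, patch, build)

    def bump_patch(self) -> "Version":
        # A patch release leaves the numbers above it alone; here the build
        # counter restarts at zero (policies vary, as noted below).
        return Version(self.major, self.minor, self.patch + 1, 0)

    def __str__(self) -> str:
        return f"{self.major}.{self.minor}.{self.patch}.{self.build}"

print(Version.parse("4.2.7.128").bump_patch())  # prints 4.2.8.0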
The Daily or Latest release (AA or AAA)
The most automated of the version numbers is the daily or latest stable release number. If you use a build system, the last number is the build number. Every time code builds, the last number increments automatically, whether the build is successful or not. When the build is successful, usually by passing all automated tests, it is considered the latest stable build. This build is usually not released, but teams use it to develop additional features or patches or validate release candidates. While this build usually comes out of the development branch (in a three-branch system), nothing prevents it from coming from other automated build branches.
This number may or may not reset if one of the other three numbers is incremented based on the release management policy.
Many artifact repositories will alias the newest stable build as latest.
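As a sketch of how little manual work the daily/latest number should need, here is a minimal example that assembles the full version string from a base version plus a build number supplied by the build server (assuming a Jenkins-style BUILD_NUMBER environment variable; the base version shown is made up):

import os

BASE_VERSION = "3.7.2"  # XX.YY.ZZ, maintained by release management

def full_version() -> str:
    # BUILD_NUMBER is set by Jenkins-style build servers; fall back to 0
    # for local builds so the string is always well formed.
    build = os.environ.get("BUILD_NUMBER", "0")
    return f"{BASE_VERSION}.{build}"

if __name__ == "__main__":
    print(full_version())  # e.g. 3.7.2.128 on the 128th automated build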
The Patch release (ZZ)
The most frequently updated public version number is the patch release number. This indicates changes to the base code that all users should take and update. This may include specific spot releases or roll-up patches from daily releases that are now regression tested and ready for everyone.
The patch release should not include new functionality. This release contains bug fixes only, whether major or minor.
The Feature Release (YY)
As an application matures, new features are added, and these features do not break backward compatibility but extend existing functionality or introduce new functionality. These releases tend to be less frequent than patch releases.
The Major Release (XX)
When an application does a major release, we are talking about new features and functionality that break backward compatibility with prior versions. These big releases might include new kernels, schema designs, or APIs that no longer talk to older versions.
Major versions are often entirely new code bases, redesigned UIs, builds tuned to specific hardware, or responses to changes in third-party requirements for API interactions. Depending on the software, it is usually possible to run two major versions simultaneously (although you probably do not want to).
Internal versus External Version Numbers
As mentioned, version control can be affected by marketing. It is common, though bad practice, for the marketing version number not to be updated when a system is patched, even if the internal version number is. Sometimes this is to obscure the patch, sometimes to prevent confusion with marketing campaigns, and sometimes it is contractual.
It is vital to ensure that regardless of the marketing number, you keep tight control of the internal numbers, incrementing them per build, patch, feature merge, and, finally, major release. These numbers should result in tags in the version control system, and all software that ends up in the artifact repository should be clearly identified.
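If you do end up carrying a marketing number alongside the engineering number, the translation grid mentioned earlier can be as simple as a small, version-controlled mapping. A hypothetical sketch (every version string and box number here is made up for illustration):

# Hypothetical translation grid between internal engineering versions and
# the external (marketing) number printed on the box. Keep it in version
# control next to the release notes so the mapping is never lost.
INTERNAL_TO_MARKETING = {
    "4.2.7.128": "12",   # "This year we put a 12 on the box."
    "4.2.8.0":   "12",   # patch release; marketing number unchanged
    "5.0.0.1":   "13",   # major release, new box number
}

def marketing_version(internal: str) -> str:
    # Return the external number for a given internal build, or the internal
    # number itself if marketing has not assigned one yet.
    return INTERNAL_TO_MARKETING.get(internal, internal)

print(marketing_version("4.2.8.0"))   # 12
print(marketing_version("4.3.0.17"))  # falls back to 4.3.0.17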
A couple of months ago, I found a program called Mermaid, a JavaScript framework for translating Markdown-like text into process diagrams and other relationship diagrams. This is a really cool thing.
Most of us, at some point in our careers, have been forced to create a process diagram of some kind. Or maybe it is an Entity Relationship Diagram (ERD) or a user journey. In many cases, if it is a simple diagram, you will open your PowerPoint analog and hope you can make the boxes and arrows do what you want them to do. For more complicated diagrams, you probably opened a dedicated diagramming tool, like Lucidchart, Miro, or Visio.
The largest problem with all of these tools is keeping the diagram current. The second problem is sharing the document. While many people think everyone runs Microsoft Office, the reality is that Office and its analogs are falling behind the state of the art. Producing documents on tablets and phones calls for lighter-weight applications, and what is lighter weight than raw text?
Markdown, properly parsed, can create elegant, multi-platform documents that can be managed in standard version control systems without the syntactic overhead associated with even the lightest-weight output from applications like Word. Markdown supports PDFs, web pages, scripts, tables, and now, with Mermaid, complex diagrams!
Because of the various levels of support across browsers and operating systems, there are many different ways to parse the data so everyone can see it. Let's look at an example.
I took an existing process diagram that I created in Lucidchart and translated it into Markdown, and the code looks like this:
flowchart LR
A[Rally Stories & Tasks] --> B[Developer Creates Code]
B --> |GitHub Enterprise| C{Security Scan}
C --> |Scan Fail| A
C --> |Scan Pass| D[Build Unit Artifact]
D --> E{Automated Tests incl TDD-BDD-Smoke}
E --> |Pass| G[Automatic Build of Materials]
E --> |Fail| F[Teams Alert for SM]
F --> A
G --> H[Deployment to End-to-End]
H --> I{Automated Tests incl TDD-BDD-Smoke}
I --> |Pass| J[Artifactory-Promotable]
I --> |Fail| F
Using the Mermaid Markdown parser (available from the Mac or iOS store), you get a simple diagram that looks like this:
But what is really nice, again depending on your browser and OS support: if I embed the code above and wrap it in the right tags, you will see the rendered diagram (above), or you will get raw text like the code block above (at least in WordPress). Better still, if you check it into GitHub (changing the wrappers), you have live diagrams that are in version control and can be easily updated as the project or its goals change. No more passing around files that cannot be edited, or can only be edited by one person. No more asking whether this is current. And that makes knowledge transfer easier.
Depending on your browser and OS, what is below is either the diagram or the code.
flowchart LR
A[Rally Stories & Tasks] --> B[Developer Creates Code]
B --> |GitHub Enterprise| C{Security Scan}
C --> |Scan Fail| A
C --> |Scan Pass| D[Build Unit Artifact]
D --> E{Automated Tests incl TDD-BDD-Smoke}
E --> |Pass| G[Automatic Build of Materials]
E --> |Fail| F[Teams Alert for SM]
F --> A
G --> H[Deployment to End-to-End]
H --> I{Automated Tests incl TDD-BDD-Smoke}
I --> |Pass| J[Artifactory-Promotable]
I --> |Fail| F
I had the opportunity to teach last month. One of the topics I covered was the cloud as an environment and as a platform. One of the most significant issues I had conveying the information was a general lack of understanding of just what comprises today's platforms and of how it all works. I ended up describing in detail the process of sending data from one machine to another, regardless of whether that machine was a phone or a computer, and how it traversed the network, be it cellular or physical cable. I thought this explanation might benefit others.
ISO OSI layer model
Before we can discuss the process, we have to understand our stack. In this case, it is the standard ISO Open Systems Interconnection (OSI) model. The model, from top to bottom, looks like this:
Layer 7 - Applications Layer
Layer 6 - Presentation Layer
Layer 5 - Session Layer
Layer 4 - Transport Layer
Layer 3 - Network Layer (sometimes called the routing layer)
Layer 2 - Data Link Layer (sometimes called the switching layer)
Layer 1 - Physical Layer
It is essential to understand what happens at each of these layers from a theoretical perspective, especially if you are responsible for debugging a problem in your environment. It is also essential to recognize that it is a model of how data flows through the system. Certain aspects of the model might be bypassed for specific application or protocol purposes. But generally, if you understand this model, the rest of the process will flow from here.
In brief:
Layer 1 - Physical
The physical layer is responsible for the transmission and reception of unstructured raw data between a device and a physical transmission medium such as Ethernet (Cat 5/6 copper cable), any of the various forms of fibre (used both in the network and for server-to-storage transfers), or coax (used primarily for long-haul and building connections). It converts the digital bits into electrical, radio, or optical signals. It is the cables that push data between servers and between servers and storage. Bluetooth can be thought of as a physical layer connection, although it did not exist when the original model was developed. X.25 is one of the earliest protocols developed to support the physical layer.
Layer 2 - Data Link Layer
The data link layer provides node-to-node data transfer, a link between two directly connected nodes. Here we begin to talk about the frames of a data packet and the establishment of both the media access control (MAC) layer, where devices gain access to the network layer protocols, and the logical link control (LLC) layer, where encapsulation, error checking, and frame synchronization begin. This is where Ethernet standards, WiFi standards, and the old Point-to-Point Protocol (PPP) appear.
Layer 3 - Network Layer
The network layer provides the functional and procedural means of transferring packets from one node to another connected to a different network, effectively routing packets from one network to another with intelligence. When we talk about routing protocols, we hear terms like EIGRP (Cisco proprietary) and OSPF. Older protocols include RIP. IPsec also operates at layer 3.
Layer 4 - Transport Layer
The transport layer provides the functional and procedural means of transferring variable-length data sequences from a source to a destination host while maintaining quality of service. We start talking about the size of a data packet on the network (the frame size). A standard Ethernet frame carries at most 1500 bytes (the MTU); after the IP and TCP headers, typically about 40 bytes combined, roughly 1460 bytes remain for payload. Larger frames may be transmitted if all the routers and switches in the path agree to it, but if you are connecting to the Internet, 1500 bytes is all you get. As a result, large data transfers are segmented into many(!) TCP packets.
This layer is also responsible for flow control but not for reliability. That is the responsibility of the protocol. TCP, as a protocol, is chatty. It acknowledges each packet sent and received, thus ensuring reliability. UDP is what we call an unreliable protocol. If something transfers over UDP, there is no guarantee mechanism or message to ensure it arrived successfully.
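To make the TCP/UDP distinction concrete, here is a minimal sketch using Python's standard socket module; the host and port are hypothetical placeholders, not a real service:

import socket

HOST, PORT = "example.com", 9000  # hypothetical endpoint for illustration

# TCP: connection-oriented; the stack acknowledges and retransmits segments for us.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as tcp:
    tcp.connect((HOST, PORT))        # three-way handshake happens here
    tcp.sendall(b"hello over TCP")   # delivered in order, or we get an error

# UDP: connectionless; the datagram is sent once, with no acknowledgement.
with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as udp:
    udp.sendto(b"hello over UDP", (HOST, PORT))  # fire and forget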
Layer 5 - Session Layer
This layer is responsible for dialog control. It establishes, manages, and then terminates connections in full-duplex, half-duplex, or simplex mode. This is also where session checkpointing occurs, such as with remote procedure calls (RPCs). In practice, the session layer is largely theoretical; in the TCP/IP stack it is folded into other layers.
Layer 6 - Presentation Layer
The presentation layer is the context switch between application layer items. This is where mapping (if needed), encapsulation (such as TLS), and other transformations occur as data moves up or down the communication stack.
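For a sense of where these layers show up in everyday code, here is a minimal sketch using Python's standard socket and ssl modules. The TLS encapsulation happens at what the model calls the presentation layer, while the plain socket underneath covers the transport layer and below:

import socket
import ssl

context = ssl.create_default_context()  # presentation-layer duty: TLS encapsulation

with socket.create_connection(("example.com", 443)) as raw_sock:               # layers 1-4
    with context.wrap_socket(raw_sock, server_hostname="example.com") as tls:  # layer 6
        tls.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\n\r\n")             # layer 7
        print(tls.recv(200))  # first bytes of the HTTP response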
Layer 7 - Application Layer
This is where the user generally interacts with the application. Where Graphical User Interfaces are drawn and displayed and where input and output occur.
While this is all grossly simplified, it highlights several steps where things can go wrong. It also highlights why you need to know how your application will intersect with the various subsystems below it and what that impact might mean in terms of resource allocation, consumption, and application performance over time.
The Server Farm
The cloud is nothing more than many servers working together, with or without some form of storage. Technically, today, the cloud is more about the application you are interacting with and less about where the environment the application runs in is located or how it is constructed. It is crucial to understand, at least in theory, what the application is doing, where it lives, and why it all works the way it does1.
Companies that provide cloud services (like Amazon Web Services and Microsoft Azure) maintain large data centers, essentially warehouses, filled with equipment. A lot of equipment. Most of this equipment lives in racks nineteen inches across and 42 U (a technical term) in height2.
A typical rack includes:
The rack (and screws, do not forget the screws)
A power source (most are DC powered in data centers)
The servers themselves, which will take up most of the space in the rack: anywhere from 38 servers (at 1U) down to 9 servers (at 4U). Large servers tend to be reserved for more specialized purposes, like running database platforms within the environment (think Amazon Redshift or Azure SQL Server).
In other parts of the datacenter, you will find racks dedicated to storage, routing, and other operational requirements (like the Amazon Marketplace) or CI/CD for host management. Most data centers do not run at 100%. At any time, a host (or a rack of them) will be down for maintenance or replacement, hard disks need tending, and wires and cables sometimes need to be repositioned to add more capacity or change it.
But there is still one more layer of abstraction that needs to be discussed. It is unlikely you will run your application on a server directly, what we call bare metal. In most cases (all cases for AWS or other commercial cloud vendors), you will run in a virtualized (guest) space on the host OS (the OS on the bare metal). Such hosts include VMware's vSphere, Microsoft's Hyper-V, or Linux's KVM. Through magical trickery4, the host mirrors the bare metal for the guest OSs, and depending on the resources of the bare metal, you can run multiple guests per server. With shared storage, you can interconnect storage across various hosts. This becomes the power of the cloud.
When is the cloud a platform?
Using the above figure as a reference, up to this point, we have discussed the traditional server farm model, whether that model has all components in a single rack or a series of racks. In this case, you are responsible for managing all aspects of the environment, including the power and cooling you need to keep it functioning, along with the personnel to run it. This is not the cloud.
When we discuss Infrastructure as a Service (IaaS - second column), it becomes a cloud environment. It is at this point that the management of the environment shifts and splits. The company providing the infrastructure (such as Amazon) is responsible for all aspects of that environment up to the (guest) operating system level. From that point onward, the customer is responsible. In most cases, this means they are responsible for the security and patching of the guest and all applications running on and interacting with the guest. This may be a single host (guest) or a whole farm of interconnected hosts with containers, databases, and storage. In essence, you are purchasing traditional (albeit virtual) infrastructure from a provider. The line between cloud and traditional networks is blurred, and it is easy to confuse the two. Just remember, if you are using Infrastructure as a Service, there are still many subsystems you are not managing, nor are you responsible for keeping them current.
It becomes a platform when you move into Platform as a Service (PaaS). The application is where you interact, and the only responsibility you have is for the application and its associated architecture. Any patches or updates to the OS, the databases, etc., are the cloud provider's responsibility. You may need to adapt your application based on changes in APIs or associated calls to the middleware, but those are generally small changes advertised well in advance of any lower-level updates. You can also reduce the skill set you need to have on staff. Typically, you are only developing your application. Most software companies that develop their own applications are at this level.
Finally, there is Software as a Service (SaaS). Offerings like Salesforce, Workday, and Lucidchart, where you rent application space, are SaaS. While you might configure the application or write additional customizations to address gaps, you are not responsible for the underlying platform or application. Updates are delivered to you, with warnings when there are systemic changes you need to account for or prepare for, but at the end of the day you are renting the application and using your own data. As the consumer, you do not have to do anything.
Moving the data
One of the reasons for the success of the Internet is the ability to move data from here to there through standard, well-understood protocols. Before the 1990s, most of this was via dedicated communication lines between universities and certain federal agencies. These systems were almost always Unix-based and used TCP/IP as the communications protocol. Tim Berners-Lee had not yet developed HTTP for sharing documents, there was no Google for finding sites, and there certainly was not enough bandwidth for video streaming, much less the codecs. Any data transferred went via FTP or email, and security was not high on the list of essential items. Most people had never heard of the Internet. That all changed, and rapidly, throughout the 1990s and early 2000s.
But there was still the limitation of the physical cable, whether that was in the data center, the business office, or the connection between the house computer and the Internet Service Provider (ISP). Firms like AOL, CompuServe, and Prodigy were the first access points (the on-ramps) to the information superhighway. Often they came with additional protections and filters that kept people from the bad parts of the Internet. In the middle part of the 1980s, the FCC released the 2.4 GHz band for unlicensed use (the same general frequency range used by microwave ovens, and close to some of today's mid-band 5G). However, it was not until the late 1990s that the first reliable WiFi interfaces were released for public use, adopted primarily by corporations rather than home users.
At the same time, we began to see high-speed, high-capacity, high-bit-rate lines deployed between central offices and the ISPs. Dedicated (T1 & T3) and fractional (Frame Relay, ATM) connections linked business offices to the main office and flowed through increasingly large, complicated telco clouds. This further increased the concentration of data within the telco networks.
All of this would be required for the next quantum leap forward - the movement (and capability) to use mobile devices connected to the cellular network. It would take another discussion to cover the technology in the cellular network but go back to our basics. The mobile device is connected to the cell tower, where it gets an IP address (IPv6 in case you are wondering, and that is yet another discussion). Data is then encapsulated on the phone and sent up the wire (cell connection) to the tower, where it is received, translated into bits, and sent to the network. The cell tower acts as part of the physical layer. Modern-day mobile equipment is no different from your laptop or desktop in terms of networking software. Because of this, we are seeing the convergence of mobile and desktop operating systems at a speed that eclipses their initial development.
Conclusion
There are many areas of this process that have been glossed over for simplicity's sake. For example, the whole discussion of a TCP packet takes up three volumes, starting with TCP/IP Illustrated, Volume 1: The Protocols. Routing and switching is a skill set of its own, and storage management is a full-time job. Then there are the aspects of database management, cellular RF engineering, and the headaches of making good fibre connections (hint: polishing glass is tricky to master).
But with this overview, you should begin to understand the levels of responsibility and, more importantly, the amount of complexity that has been engineered out of the system, especially if you work at the platform level. Clouds are not free. There is a great deal of work and cost in keeping them operational, even if you are not the one doing it.
In many cases, the location where your application is running or the data is residing is not an issue, but remember that the cloud, especially for large cloud providers, could have data stored in locations that may have legal ramifications for that data. ↩
We call them 19" racks because of the space between the screw holes that hold the equipment in place. The outer dimensions of each rack are generally two feet wide by as much as two feet deep, but most servers stick out another foot or so beyond that. A rack is measured in how many rack units high it is (a rack unit is 1-3/4 inches), which defines how much equipment it can hold. ↩
These are often called Layer 3 switches, which combine the features of a Layer 2 switch functionality and the routing capabilities of a traditional router. ↩
We could spend another hour discussing the various coding games played by the kernel and the requirements in the CPU for virtualization to work. ↩
I have long said that many of the tools and practices used in DevOps are suitable for Legacy software development. Their use should be encouraged. But beware. Just because you apply some DevOps and Agile practices, it does not mean you are doing DevOps. For example, just because you have automated builds in a pipeline, it does not mean you are doing DevOps. DevOps (and Site Reliability Engineering) require particular aspects of Agile to succeed. Have a stand-up? Do you actually stand up? Is the stand-up less than fifteen minutes? Do you only cover what you did yesterday, what you plan to do today, and list your blockers?
I have witnessed a few things over my career at companies that do DevOps but, no, not really. I will update this list as more examples come to light. I am sure they will.
With apologies to Jeff Foxworthy.
If you complain the Agile ceremonies are taking too much time out of your day for coding, you’re not doing DevOps.
If your stories don’t fit into your sprint, you’re not doing DevOps.
If getting a team to look at an issue requires several manual emails to more people than have appeared on Survivor, you’re not doing DevOps.
If you are outcome-driven rather than output-driven, you’re not doing DevOps.
If you have to have a laundry list of features that have to be finished before code freeze, you’re not doing DevOps.
If your features sit unused for months before they are implemented into production, you’re not doing DevOps.
If your shortest scrum of scrums meeting gave you a minute back in your day, and it was only 59 minutes long, you’re not doing DevOps.
If your API gateways are tracked manually, updated randomly, and require multiple teams to update, you’re not doing DevOps.
If your API gateways are stored in version controls, but you have to update your code manually, you’re not doing DevOps.
If your automated deployment process requires an email to be manually sent to more people than a Major League Baseball team to start the smoke test, you’re not doing DevOps.
If you track build release numbers manually on a spreadsheet or a wiki page for each environment, you’re not doing DevOps.
If they changed the password again, and we have to get it from… is a common refrain in your RCA meetings, you’re not doing DevOps.
If your request to refresh test data is met with a we don’t have the time response, you’re not doing DevOps.
If you run out of disk space because the application or system has no monitoring and alerting, you’re not doing DevOps.
If running out of disk space causes your application to fail, you’re not doing DevOps.
If you are not using elastic environments, you’re not doing DevOps.
If the term self-healing is met with blank stares, you’re not doing DevOps.
If the phrase yeah, we noticed that bug in development too, is common during RCAs, you’re not doing DevOps.
If the phrase worked in the lower environments, is common during RCAs, you’re not doing DevOps.
If another team is writing the unit tests and behavioral tests (TDD/BDD), you’re not doing DevOps.
If you have to set up a reminder to replace or renew your certificates, you’re not doing DevOps.
If you have to call two dozen people and wait an hour for a pull request to production, you’re not doing DevOps.
If your security system worked in production, and development, but no longer works after your release, and you made no changes to the code, you’re not doing DevOps.
If a minor upgrade to production requires a meeting of more people than the cast and crew of Game of Thrones, you’re not doing DevOps.
If the first step in updating your applications is turn off the web server, you’re not doing DevOps.
If a minor update to production requires you to shut down for a day and notify the business that their systems will be unavailable, you’re not doing DevOps.
If you are more worried about data backup procedures than being down for a day, you’re not doing DevOps.
If you are more concerned with your roll-back plan than you are about being down for a day for a minor upgrade, you’re not doing DevOps.
If you need to take a day to do a minor upgrade, you’re not doing DevOps.
If you build it but do not release, you’re not doing DevOps.
On Tuesday, October 18, 2021, on the passing of Colin Powell, 45 issued this statement:
Wonderful to see Colin Powell, who made big mistakes on Iraq and famously, so-called weapons of mass destruction, be treated in death so beautifully by the Fake News Media. Hope that happens to me someday. He was a classic RINO, if even that, always being the first to attack other Republicans. He made plenty of mistakes, but anyway, may he rest in peace! (Washington Post Daily 202)
Not to worry there Donnie boy, the reports of your passing will be less glowing. You have not done anything nearly as important.
A conservative Florida radio host who was dead-set against taking a coronavirus vaccine is now dead. Marc Bernier died Saturday of COVID-19 after a three-week battle, his bereft radio station announced. He was 65. (Daily News)
As of today, Monday, August 30, 2021, more than 630,000 people in the United States have died from COVID-19, and there are some 38 million reported cases, yet barely half of the United States has been vaccinated. (NY Times) The 14-day rate change for deaths alone is at +96%, with a daily average of 1200 deaths. An estimated 4.1 million people around the world have died from this disease and the current epicenter is the United States.
Yet individuals like Bernier and his ilk, male and female, continue to deny there is anything to see here, or claim it is a scam or some federal indoctrination program that can be avoided or cured with vitamin C and aspirin. Or veterinary dewormers.
If you do not want to get vaccinated for whatever crazy ideal you feel is worth dying for, that is your prerogative. But please, for the love of humanity, stop spouting incorrect, misleading, or flat-out distorted opinions. Too many have died, too many are sick, and too many are struggling to get through their day taking care of those who otherwise might have lined up for the vaccine.
Oh, and the reactions:
Longtime radio show guest and Volusia County Sheriff Mike Chitwood was gutted after learning that the host had died…
We kindly ask that privacy is given to Marc’s family during this time of grief.