Use containers for Go development

Work through the steps of the Run your image as a container module to learn how to manage the lifecycle of your containers.

Introduction

In this module, you'll take a look at running a database engine in a container and connecting it to the extended version of the example application. You are going to see some options for keeping persistent data and for wiring up the containers to talk to one another. Finally, you'll learn how to use Docker Compose to manage such multi-container local development environments effectively.

The database engine you are going to use is called CockroachDB. It is a modern, cloud-native, distributed SQL database.

Instead of compiling CockroachDB from source or using the operating system's native package manager to install it, you are going to use the Docker image for CockroachDB and run it in a container.

CockroachDB is compatible with PostgreSQL to a significant extent, and shares many conventions with the latter, particularly the default names for the environment variables. So, if you are familiar with Postgres, don't be surprised if you see some familiar environment variable names. The Go modules that work with Postgres, such as pgx, pq, GORM, and upper/db, also work with CockroachDB.
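For instance, here is a minimal sketch of connecting to CockroachDB from Go with the standard database/sql package and the pq driver. It anticipates the database, user, and port that this guide sets up later, so treat the connection parameters as assumptions; only the non-standard port 26257 hints that the server isn't Postgres:

package main

import (
	"database/sql"
	"fmt"
	"log"

	_ "github.com/lib/pq" // blank import registers the "postgres" driver
)

func main() {
	// CockroachDB speaks the PostgreSQL wire protocol, so a standard
	// Postgres connection string works. 26257 is CockroachDB's default port.
	db, err := sql.Open("postgres",
		"host=localhost port=26257 dbname=mydb user=totoro sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	var version string
	if err := db.QueryRow("SELECT version()").Scan(&version); err != nil {
		log.Fatal(err)
	}
	fmt.Println(version)
}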

For more information on the relationship between Go and CockroachDB, refer to the CockroachDB documentation, although this isn't necessary to continue with the present guide.

Storage

The point of a database is to have a persistent store of data. Volumes are the preferred mechanism for persisting data generated by and used by Docker containers. Thus, before you start CockroachDB, create the volume for it.

To create a managed volume, run:

$ docker volume create roach
roach

You can view the list of all managed volumes in your Docker instance with the following command:

$ docker volume list
DRIVER    VOLUME NAME
local     roach

Networking

The example application and the database engine are going to talk to one another over the network. There are different kinds of network configuration possible, and you're going to use what's called a user-defined bridge network. It is going to provide you with a DNS lookup service so that you can refer to your database engine container by its hostname.
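To see that DNS lookup in action, you could run a tiny Go program like the one below inside a container attached to mynet, once the database container (hostname db, created in the next sections) is up. This is an illustrative sketch only, not part of the example application:

package main

import (
	"fmt"
	"net"
)

func main() {
	// Docker's embedded DNS server resolves container hostnames for
	// containers attached to the same user-defined bridge network.
	addrs, err := net.LookupHost("db")
	if err != nil {
		fmt.Println("lookup failed:", err)
		return
	}
	fmt.Println("db resolves to:", addrs)
}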

The following command creates a new bridge network named mynet:

$ docker network create -d bridge mynet
51344edd6430b5acd121822cacc99f8bc39be63dd125a3b3cd517b6485ab7709

As was the case with the managed volumes, there is a command to list all networks set up in your Docker instance:

$ docker network list
NETWORK ID     NAME      DRIVER    SCOPE
0ac2b1819fa4   bridge    bridge    local
51344edd6430   mynet     bridge    local
daed20bbecce   host      host      local
6aee44f40a39   none      null      local

Your bridge network mynet has been created successfully. The other three networks, named bridge, host, and none, are the default networks and were created by Docker itself. While it's not relevant to this guide, you can learn more about Docker networking in the networking overview section.

Choose good names for volumes and networks

As the saying goes, there are only two hard things in Computer Science: cache invalidation and naming things. And off-by-one errors.

When choosing a name for a network or a managed volume, it's best to choose a name that's indicative of the intended purpose. This guide aims for brevity, so it uses short, generic names.

Start the database engine

Now that the housekeeping chores are done, you can run CockroachDB in a container and attach it to the volume and network you just created. When you run the following command, Docker pulls the image from Docker Hub and runs it for you locally:

$ docker run -d \
  --name roach \
  --hostname db \
  --network mynet \
  -p 26257:26257 \
  -p 8080:8080 \
  -v roach:/cockroach/cockroach-data \
  cockroachdb/cockroach:latest-v20.1 start-single-node \
  --insecure

# ... output omitted ...

Notice the use of the tag latest-v20.1 to make sure that you're pulling the latest patch version of 20.1. The diversity of available tags depends on the image maintainer. Here, your intent is to have the latest patched version of CockroachDB while not straying too far from the known working version as time goes by. To see the tags available for the CockroachDB image, go to the CockroachDB page on Docker Hub.
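For example, both of the following pulls should be valid for this image; the shell transcript later in this guide happens to show v20.1.15, so pinning an exact patch release (a reasonable choice when reproducibility matters most) might look like the second line:

$ docker pull cockroachdb/cockroach:latest-v20.1   # newest patch release of 20.1
$ docker pull cockroachdb/cockroach:v20.1.15       # one exact, frozen patch release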

Configure the database engine

Now that the database engine is live, there is some configuration to do before your application can begin using it. Fortunately, it's not a lot. You must:

  1. Create a blank database.
  2. Register a new user account with the database engine.
  3. Grant that new user access rights to the database.

You can do that with the help of the CockroachDB built-in SQL shell. To start the SQL shell in the same container where the database engine is running, type:

$ docker exec -it roach ./cockroach sql --insecure
  1. In the SQL shell, create the database that the example application is going to use:

    CREATE DATABASE mydb;
  2. Register a new SQL user account with the database engine. Use the username totoro.

    CREATE USER totoro;
  3. Give the new user the necessary permissions:

    GRANT ALL ON DATABASE mydb TO totoro;
  4. Type quit to exit the shell.

The following is an example of interaction with the SQL shell.

$ sudo docker exec -it roach ./cockroach sql --insecure
#
# Welcome to the CockroachDB SQL shell.
# All statements must be terminated by a semicolon.
# To exit, type: \q.
#
# Server version: CockroachDB CCL v20.1.15 (x86_64-unknown-linux-gnu, built 2021/04/26 16:11:58, go1.13.9) (same version as client)
# Cluster ID: 7f43a490-ccd6-4c2a-9534-21f393ca80ce
#
# Enter \? for a brief introduction.
#
root@:26257/defaultdb> CREATE DATABASE mydb;
CREATE DATABASE

Time: 22.985478ms

root@:26257/defaultdb> CREATE USER totoro;
CREATE ROLE

Time: 13.921659ms

root@:26257/defaultdb> GRANT ALL ON DATABASE mydb TO totoro;
GRANT

Time: 14.217559ms

root@:26257/defaultdb> quit
oliver@hki:~$

Meet the example application

Now that you have started and configured the database engine, you can switch your attention to the application.

The example application for this module is an extended version of the docker-gs-ping application you've used in the previous modules. You have two options:

  • You can update your local copy of docker-gs-ping to match the new extended version presented in this chapter; or
  • You can clone the docker/docker-gs-ping-dev repository. This latter approach is recommended.

To check out the example application, run:

$ git clone https://github.com/docker/docker-gs-ping-dev.git
# ... output omitted ...

The application's main.go now includes database initialization code, as well as the code to implement a new business requirement:

  • An HTTP POST request to /send containing a { "value" : string } JSON must save the value to the database.

You also have an update for another business requirement. The requirement was:

  • The application responds with a text message containing a heart symbol ("<3") on requests to /.

And now it's going to be:

  • The application responds with a string containing the count of messages stored in the database, enclosed in parentheses.

    Example output: Hello, Docker! (7)

The full source code listing of main.go follows.

package main

import (
	"context"
	"database/sql"
	"fmt"
	"log"
	"net/http"
	"os"

	"github.com/cenkalti/backoff/v4"
	"github.com/cockroachdb/cockroach-go/v2/crdb"
	"github.com/labstack/echo/v4"
	"github.com/labstack/echo/v4/middleware"

	// Blank import registers the "postgres" driver for database/sql.
	_ "github.com/lib/pq"
)

func main() {
	e := echo.New()
	e.Use(middleware.Logger())
	e.Use(middleware.Recover())

	db, err := initStore()
	if err != nil {
		log.Fatalf("failed to initialise the store: %s", err)
	}
	defer db.Close()

	e.GET("/", func(c echo.Context) error {
		return rootHandler(db, c)
	})
	e.GET("/ping", func(c echo.Context) error {
		return c.JSON(http.StatusOK, struct{ Status string }{Status: "OK"})
	})
	e.POST("/send", func(c echo.Context) error {
		return sendHandler(db, c)
	})

	httpPort := os.Getenv("HTTP_PORT")
	if httpPort == "" {
		httpPort = "8080"
	}

	e.Logger.Fatal(e.Start(":" + httpPort))
}

type Message struct {
	Value string `json:"value"`
}

func initStore() (*sql.DB, error) {

	pgConnString := fmt.Sprintf("host=%s port=%s dbname=%s user=%s password=%s sslmode=disable",
		os.Getenv("PGHOST"),
		os.Getenv("PGPORT"),
		os.Getenv("PGDATABASE"),
		os.Getenv("PGUSER"),
		os.Getenv("PGPASSWORD"),
	)

	var (
		db  *sql.DB
		err error
	)
	openDB := func() error {
		db, err = sql.Open("postgres", pgConnString)
		return err
	}

	err = backoff.Retry(openDB, backoff.NewExponentialBackOff())
	if err != nil {
		return nil, err
	}

	if _, err := db.Exec("CREATE TABLE IF NOT EXISTS message (value TEXT PRIMARY KEY)"); err != nil {
		return nil, err
	}

	return db, nil
}

func rootHandler(db *sql.DB, c echo.Context) error {
	r, err := countRecords(db)
	if err != nil {
		return c.HTML(http.StatusInternalServerError, err.Error())
	}
	return c.HTML(http.StatusOK, fmt.Sprintf("Hello, Docker! (%d)\n", r))
}

func sendHandler(db *sql.DB, c echo.Context) error {

	m := &Message{}

	if err := c.Bind(m); err != nil {
		return c.JSON(http.StatusInternalServerError, err)
	}

	err := crdb.ExecuteTx(context.Background(), db, nil,
		func(tx *sql.Tx) error {
			_, err := tx.Exec(
				"INSERT INTO message (value) VALUES ($1) ON CONFLICT (value) DO UPDATE SET value = excluded.value",
				m.Value,
			)
			if err != nil {
				return c.JSON(http.StatusInternalServerError, err)
			}
			return nil
		})
	if err != nil {
		return c.JSON(http.StatusInternalServerError, err)
	}

	return c.JSON(http.StatusOK, m)
}

func countRecords(db *sql.DB) (int, error) {

	rows, err := db.Query("SELECT COUNT(*) FROM message")
	if err != nil {
		return 0, err
	}
	defer rows.Close()

	count := 0
	for rows.Next() {
		if err := rows.Scan(&count); err != nil {
			return 0, err
		}
		rows.Close()
	}

	return count, nil
}

The repository also includes the Dockerfile, which is almost exactly the same as the multi-stage Dockerfile introduced in the previous modules. It uses the official Docker Go image to build the application and then builds the final image by placing the compiled binary into the much slimmer, distroless image.
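If you'd rather not open the repository, such a Dockerfile is roughly along the lines of the sketch below. Treat it as an assumption-laden illustration: the exact base image tags and build flags in the repository may differ.

# Build stage: compile the application using the official Go image.
FROM golang:1.16 AS build
WORKDIR /app
COPY go.mod go.sum ./
RUN go mod download
COPY . .
# CGO_ENABLED=0 produces a static binary that can run in a distroless image.
RUN CGO_ENABLED=0 go build -o /docker-gs-ping-roach

# Final stage: copy the binary into a much slimmer distroless image.
FROM gcr.io/distroless/base-debian10
COPY --from=build /docker-gs-ping-roach /docker-gs-ping-roach
EXPOSE 8080
ENTRYPOINT ["/docker-gs-ping-roach"]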

Regardless of whether you updated the old example application or checked out the new one, you must build a new Docker image to reflect the changes to the application source code.

Build the application

You can build the image with the familiar build command:

$ docker build --tag docker-gs-ping-roach .

Run the application

Now, run your container. This time you'll need to set some environment variables so that your application knows how to access the database. For now, you’ll do this right in the docker run command. Later you'll see a more convenient method with Docker Compose.

Note

Since you're running your CockroachDB cluster in insecure mode, the value for the password can be anything.

In production, don't run in insecure mode.

$ docker run -it --rm -d \
  --network mynet \
  --name rest-server \
  -p 80:8080 \
  -e PGUSER=totoro \
  -e PGPASSWORD=myfriend \
  -e PGHOST=db \
  -e PGPORT=26257 \
  -e PGDATABASE=mydb \
  docker-gs-ping-roach

There are a few points to note about this command.

  • You map container port 8080 to host port 80 this time. Thus, for GET requests you can get away with literally curl localhost:

    $ curl localhost
    Hello, Docker! (0)

    Or, if you prefer, a proper URL would work just as well:

    $ curl http://localhost/
    Hello, Docker! (0)
  • The total number of stored messages is 0 for now. This is fine, because you haven't posted anything to your application yet.

  • You refer to the database container by its hostname, which is db. This is why you had --hostname db when you started the database container.

  • The actual password doesn't matter, but it must be set to something to avoid confusing the example application.

  • The container you've just run is named rest-server. These names are useful for managing the container lifecycle:

    # Don't do this just yet, it's only an example:
    $ docker container rm --force rest-server

Test the application

In the previous section, you've already tested querying your application with GET and it returned zero for the stored message counter. Now, post some messages to it:

$ curl --request POST \
  --url http://localhost/send \
  --header 'content-type: application/json' \
  --data '{"value": "Hello, Docker!"}'

The application responds with the contents of the message, which means it has been saved in the database:

{"value":"Hello, Docker!"}

Send another message:

$ curl --request POST \
  --url http://localhost/send \
  --header 'content-type: application/json' \
  --data '{"value": "Hello, Oliver!"}'

And again, you get the value of the message back:

{"value":"Hello, Oliver!"}

Run curl and see what the message counter says:

$ curl localhost
Hello, Docker! (2)

In this example, you sent two messages and the database kept them. Or did it? Stop and remove all your containers, but not the volumes, and try again.

First, stop the containers:

$ docker container stop rest-server roach
rest-server
roach

Then, remove them:

$ docker container rm rest-server roach
rest-server
roach

Verify that they're gone:

$ docker container list --all
CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES

And start them again, database first:

$ docker run -d \
  --name roach \
  --hostname db \
  --network mynet \
  -p 26257:26257 \
  -p 8080:8080 \
  -v roach:/cockroach/cockroach-data \
  cockroachdb/cockroach:latest-v20.1 start-single-node \
  --insecure

And the service next:

$ docker run -it --rm -d \
  --network mynet \
  --name rest-server \
  -p 80:8080 \
  -e PGUSER=totoro \
  -e PGPASSWORD=myfriend \
  -e PGHOST=db \
  -e PGPORT=26257 \
  -e PGDATABASE=mydb \
  docker-gs-ping-roach

Lastly, query your service:

$ curl localhost
Hello, Docker! (2)

Great! The count of records from the database is correct, even though you not only stopped the containers, but also removed them before starting new instances. The difference is in the managed volume for CockroachDB, which you reused. The new CockroachDB container has read the database files from the disk, just as it normally would if it were running outside the container.

Wind down everything

Remember that you're running CockroachDB in insecure mode. Now that you've built and tested your application, it's time to wind everything down before moving on. You can list the containers that you are running with the list command:

$ docker container list

Now that you know the container IDs, you can use docker container stop and docker container rm, as demonstrated in the previous modules.

Stop the CockroachDB and docker-gs-ping-roach containers before moving on.

Better productivity with Docker Compose

At this point, you might be wondering if there is a way to avoid having to deal with long lists of arguments to the docker command. The toy example you used in this series requires five environment variables to define the connection to the database. A real application might need many, many more. Then there is also a question of dependencies. Ideally, you want to make sure that the database is started before your application is run. And spinning up the database instance may require another Docker command with many options. But there is a better way to orchestrate these deployments for local development purposes.

In this section, you'll create a Docker Compose file to start your docker-gs-ping-roach application and CockroachDB database engine with a single command.

Configure Docker Compose

In your application's directory, create a new text file named docker-compose.yml with the following content.

version: '3.8'

services:
  docker-gs-ping-roach:
    depends_on:
      - roach
    build:
      context: .
    container_name: rest-server
    hostname: rest-server
    networks:
      - mynet
    ports:
      - 80:8080
    environment:
      - PGUSER=${PGUSER:-totoro}
      - PGPASSWORD=${PGPASSWORD:?database password not set}
      - PGHOST=${PGHOST:-db}
      - PGPORT=${PGPORT:-26257}
      - PGDATABASE=${PGDATABASE:-mydb}
    deploy:
      restart_policy:
        condition: on-failure
  roach:
    image: cockroachdb/cockroach:latest-v20.1
    container_name: roach
    hostname: db
    networks:
      - mynet
    ports:
      - 26257:26257
      - 8080:8080
    volumes:
      - roach:/cockroach/cockroach-data
    command: start-single-node --insecure

volumes:
  roach:

networks:
  mynet:
    driver: bridge

This Docker Compose configuration is super convenient as you don't have to type all the parameters to pass to the docker run command. You can declaratively do that in the Docker Compose file. The Docker Compose documentation pages are quite extensive and include a full reference for the Docker Compose file format.

The .env file

Docker Compose will automatically read environment variables from a .env file if it's available. Since your Compose file requires PGPASSWORD to be set, add the following content to the .env file:

PGPASSWORD=whatever

The exact value doesn't really matter for this example, because you run co*ckroachDB in insecure mode. Make sure you set the variable to some value to avoid getting an error.

Merging Compose files

The file name docker-compose.yml is the default file name which the docker compose command recognizes if no -f flag is provided. This means you can have multiple Docker Compose files if your environment has such requirements. Furthermore, Docker Compose files are... composable (pun intended), so multiple files can be specified on the command line to merge parts of the configuration together. The following list is just a few examples of scenarios where such a feature would be very useful:

  • Using a bind mount for the source code for local development but not when running the CI tests;
  • Switching between using a pre-built image for the frontend for some API application vs creating a bind mount for source code;
  • Adding additional services for integration testing;
  • And many more...

These advanced use cases aren't covered here, but the sketch below shows what the merge syntax looks like on the command line.
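A minimal, hypothetical example — the override file name is made up for illustration:

$ docker compose \
    -f docker-compose.yml \
    -f docker-compose.integration-tests.yml \
    up --build

Values in later files take precedence over the same keys in earlier ones, which is what makes per-environment overrides possible.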

Variable substitution in Docker Compose

One of the really cool features of Docker Compose is variable substitution. You can see some examples in the Compose file, in the environment section. For example:

  • PGUSER=${PGUSER:-totoro} means that inside the container, the environment variable PGUSER shall be set to the same value as it has on the host machine where Docker Compose is run. If there is no environment variable with this name on the host machine, the variable inside the container gets the default value of totoro.
  • PGPASSWORD=${PGPASSWORD:?database password not set} means that if the environment variable PGPASSWORD isn't set on the host, Docker Compose will display an error. This is OK, because you don't want to hard-code default values for the password. You set the password value in the .env file, which is local to your machine. It is always a good idea to add .env to .gitignore to prevent secrets from being checked into version control.

Other ways of dealing with undefined or empty values exist, as documented in the variable substitution section of the Docker documentation.

Validating Docker Compose configuration

Before you apply changes made to a Compose configuration file, there is an opportunity to validate the content of the configuration file with the following command:

$ docker compose config

When this command is run, Docker Compose reads the file docker-compose.yml, parses it into a data structure in memory, validates where possible, and prints back the reconstruction of that configuration file from its internal representation. If this isn't possible due to errors, Docker prints an error message instead.
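For example, assuming you haven't created the .env file yet, you can supply the required password inline and inspect the resolved output; the exact rendering differs between Compose versions:

$ PGPASSWORD=whatever docker compose config
# ... prints the fully resolved configuration, with defaults such as
# PGUSER=totoro and PGPORT=26257 already substituted in ...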

Build and run the application using Docker Compose

Start your application and confirm that it's running.

$ docker compose up --build

You passed the --build flag, so Docker Compose will build your image before starting the containers.

Note

Docker Compose is a useful tool, but it has its own quirks. For example, no rebuild is triggered on the update to the source code unless the --build flag is provided. It is a very common pitfall to edit one's source code, and forget to use the --build flag when running docker compose up.

Since your setup is now run by Docker Compose, it has been assigned a project name, so you get a new volume for your CockroachDB instance. This means that your application will fail to connect to the database, because the database doesn't exist in this new volume. The terminal displays an authentication error for the database:

# ... omitted output ...
rest-server  | 2021/05/10 00:54:25 failed to initialise the store: pq: password authentication failed for user totoro
roach        | *
roach        | * INFO: Replication was disabled for this cluster.
roach        | * When/if adding nodes in the future, update zone configurations to increase the replication factor.
roach        | *
roach        | CockroachDB node starting at 2021-05-10 00:54:26.398177 +0000 UTC (took 3.0s)
roach        | build:               CCL v20.1.15 @ 2021/04/26 16:11:58 (go1.13.9)
roach        | webui:               http://db:8080
roach        | sql:                 postgresql://root@db:26257?sslmode=disable
roach        | RPC client flags:    /cockroach/cockroach <client cmd> --host=db:26257 --insecure
roach        | logs:                /cockroach/cockroach-data/logs
roach        | temp dir:            /cockroach/cockroach-data/cockroach-temp349434348
roach        | external I/O path:   /cockroach/cockroach-data/extern
roach        | store[0]:            path=/cockroach/cockroach-data
roach        | storage engine:      rocksdb
roach        | status:              initialized new cluster
roach        | clusterID:           b7b1cb93-558f-4058-b77e-8a4ddb329a88
roach        | nodeID:              1
rest-server exited with code 0
rest-server  | 2021/05/10 00:54:25 failed to initialise the store: pq: password authentication failed for user totoro
rest-server  | 2021/05/10 00:54:26 failed to initialise the store: pq: password authentication failed for user totoro
rest-server  | 2021/05/10 00:54:29 failed to initialise the store: pq: password authentication failed for user totoro
rest-server exited with code 1
# ... omitted output ...

Because of the way you set up your deployment using restart_policy, the failing container is restarted every 20 seconds. So, to fix the problem, you need to log in to the database engine and create the user. You've done this before in Configure the database engine.

This isn't a big deal. All you have to do is connect to the CockroachDB instance and run the three SQL commands to create the database and the user, as described in Configure the database engine.

So, log in to the database engine from another terminal:

$ docker exec -it roach ./cockroach sql --insecure

And run the same commands as before to create the database mydb, the user totoro, and to grant that user the necessary permissions. Once you do that (and the example application container automatically restarts), the rest-server service stops failing and restarting, and the console goes quiet.

It would have been possible to reuse the volume from before, but for the purposes of this example it's more trouble than it's worth, and starting fresh also provides an opportunity to show how to introduce resilience into your deployment via the restart_policy Compose file feature.

Testing the application

Now, test your API endpoint. In the new terminal, run the following command:

$ curl http://localhost/

You should receive the following response:

Hello, Docker! (0)

Shutting down

To stop the containers started by Docker Compose, press ctrl+c in the terminal where you ran docker compose up. To remove those containers after they've been stopped, run docker compose down.

Detached mode

You can run containers started by the docker compose command in detached mode, just as you would with the docker command, by using the -d flag.

To start the stack defined by the Compose file in detached mode, run:

$ docker compose up --build -d

Then, you can use docker compose stop to stop the containers and docker compose down to remove them.

You can run docker compose to see what other commands are available.

Wrap up

There are some tangential, yet interesting points that were purposefully not covered in this chapter. For the more adventurous reader, this section offers some pointers for further study.

Persistent storage

A managed volume isn't the only way to provide your container with persistent storage. It is highly recommended to get acquainted with available storage options and their use cases, covered in Manage data in Docker.

CockroachDB clusters

You ran a single instance of CockroachDB, which was enough for this example. But it's possible to run a CockroachDB cluster, made up of multiple instances of CockroachDB, each running in its own container. Since the CockroachDB engine is distributed by design, it takes surprisingly little change to your procedure to run a cluster with multiple nodes.

Such a distributed setup offers interesting possibilities, such as applying Chaos Engineering techniques to simulate parts of the cluster failing and evaluating your application's ability to cope with such failures.

If you are interested in experimenting with CockroachDB clusters, check out the CockroachDB documentation; a rough sketch of what the per-node commands might look like follows below.
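In recent CockroachDB versions, each node runs cockroach start with a --join list naming its peers, and the cluster is then initialized once with cockroach init. Everything below is illustrative only — the node names and volumes are hypothetical, and this isn't a tested recipe:

$ docker run -d \
  --name roach-1 \
  --hostname roach-1 \
  --network mynet \
  -v roach-1:/cockroach/cockroach-data \
  cockroachdb/cockroach:latest-v20.1 start \
  --insecure \
  --join=roach-1,roach-2,roach-3
# ... repeat for roach-2 and roach-3, then initialize the cluster once:
$ docker exec -it roach-1 ./cockroach init --insecure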

Other databases

Since you didn't run a cluster of CockroachDB instances, you might be wondering whether you could have used a non-distributed database engine. The answer is yes, and if you were to pick a more traditional SQL database, such as PostgreSQL, the process described in this chapter would have been very similar.

In this module, you set up a containerized development environment with your application and the database engine running in different containers. You also wrote a Docker Compose file which links the two containers together and provides for easy starting up and tearing down of the development environment.

In the next module, you'll take a look at one possible approach to running functional tests in Docker.

Run your tests
