SAP MaxDB on Docker

Using containers for development and testing environments is pretty much state of the art these days. Developers and operations engineers alike gain speed by reducing the time wasted on handling various infrastructural aspects just to be able to work at all. Consequently, a wealth of applications and runtime environments is already available to run inside containers – but not all of them. SAP MaxDB is on our list of tools in production use, and it is one of those technologies that is not a first-class citizen on Docker yet. But it’s not too difficult to change this, either. Here’s our current approach…

What’s the point?

SAP MaxDB is a relational DBMS offered by SAP AG. It’s a proven, stable product with a long “history” in both technological and licensing terms. We got to know this tool around 2005, when MaxDB was provided under an open-source (GPL) license and, at version 7.5, was even available pre-packaged in Debian main. Things have changed considerably since then: the application is now offered exclusively by SAP again, provided “free of charge” as a community version for use with non-SAP applications and without any “official” SAP support. We still enjoy using this platform, however, and in the meantime excellent enterprise-grade support is provided by 7P infolytics – for administrative questions and beyond. It’s safe to say they have saved us more than once.

So at the moment we’re torn: on the one hand, as we prefer open source software in our stack, current SAP MaxDB licensing is a difficult thing to deal with. On the other hand, changing relevant components in a grown software stack is always difficult, even more so when talking about a core SQL database containing a significant amount of data with a bunch of relevant applications tied to it. We’ll have to live with it for a while, and in the meantime it should be handled as well as possible. This is where Docker comes in.

Local installation

To build a MaxDB Docker image, start out by installing plain SAP MaxDB on a Linux machine or VM. In our case, we have an archive of installer packages cached locally; anyone else can get a download here, which requires an SAP SDN login, however.

Straight ahead:

  • Download the Community Edition. Don’t let the “Trial Version” label on the web page scare you away. Unzip it somewhere on your drive.
  • Use the SDBINST text-based installer to install the database on your machine. I don’t use the interactive installer but rather start SDBINST with all required parameters, like this: ./SDBINST -global_prog /opt/maxdb/sdb/globalprograms -global_data /opt/maxdb/sdb/globaldata -o root -g root -i MaxDB -path /opt/maxdb/MaxDB -description "maxdb install" -network_port 7200
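Laid out as a script, the two steps above look roughly like this. The archive file name is a placeholder for whatever Community Edition build you actually downloaded:

```shell
# Sketch of the installation steps above. MAXDB_ARCHIVE is a
# placeholder name -- substitute the archive you downloaded.
MAXDB_ARCHIVE="maxdb-community-edition.tgz"
INSTALL_BASE="/opt/maxdb"

install_maxdb() {
  mkdir -p /tmp/maxdb-install
  tar -xzf "$MAXDB_ARCHIVE" -C /tmp/maxdb-install
  cd /tmp/maxdb-install

  # Non-interactive install: everything goes below /opt/maxdb,
  # owner and group are forced to root (see the notes below).
  ./SDBINST \
    -global_prog "$INSTALL_BASE/sdb/globalprograms" \
    -global_data "$INSTALL_BASE/sdb/globaldata" \
    -o root -g root \
    -i MaxDB \
    -path "$INSTALL_BASE/MaxDB" \
    -description "maxdb install" \
    -network_port 7200
}

# Run only if the downloaded archive is actually present.
if [ -f "$MAXDB_ARCHIVE" ]; then
  install_maxdb
fi
```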

Compared to a MaxDB default installation, two basic changes are made:

  • All data installed by this process goes to /opt/maxdb/. This is not strictly required for building a Docker container, but it eases things a bit.
  • Owner (usually sdb) and group (usually sdba) are forced to be root. This is not nice and maybe not necessary, but it helps to get started.

At this point, you should have a local MaxDB installation able to run on your system. You don’t have any databases yet, though, and you also don’t have a running container.

Building and running the container

… should be next. One of the annoying things about containerizing applications that don’t come packaged for any operating system distribution is that they tend to spread files and data all across the local file system. By providing the install folders specified above, this is at least reduced to merely three file system resources you need. My build configuration can be found on GitHub; you could start out right there:

  • Clone or download the project folder. Unlike in virtually any other Docker workflow, you should be root for all of these steps.
  • Copy the MaxDB installation data from your local file system into the project folder, preserving the full paths:
    • /opt/maxdb – contains the actual database engine and all the files belonging to it
    • /etc/opt/sdb – contains the MaxDB installation registry required by the runtime tools to find its resources
    • /var/lib/sdb – resources, mostly for the database server’s shared memory handling
  • Run docker build . -t local/maxdb in this folder.
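Scripted, the copy-and-build steps above might look like this; cp -a preserves ownership and permissions, which matters for the SUID binaries mentioned further down:

```shell
# Sketch: assemble the build context from the local MaxDB installation,
# then build the image. Run inside the cloned project folder, as root.
BUILD_DIR="$PWD"

copy_maxdb_tree() {
  for dir in /opt/maxdb /etc/opt/sdb /var/lib/sdb; do
    # Recreate the full path inside the build context,
    # e.g. /opt/maxdb ends up as ./opt/maxdb.
    mkdir -p "$BUILD_DIR$(dirname "$dir")"
    cp -a "$dir" "$BUILD_DIR$dir"
  done
}

# Run only where a local MaxDB installation actually exists.
if [ -d /opt/maxdb ]; then
  copy_maxdb_tree
  docker build . -t local/maxdb
fi
```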

At this point I quietly assume there is a running Docker installation available on your machine.

You should then be able to run containers using your newly created MaxDB image like this: docker run --name maxdb -d -p 7200:7200 -p 7210:7210 local/maxdb:latest. This starts a local container named “maxdb” based on an ubuntu:latest base image. The startup procedure will start the MaxDB x_server, create an empty database, bring it online, and expose the ports required for external applications to talk to the database. If everything worked well, you should now be able to connect to this instance using, for example, a JDBC tool of your choice (I prefer and recommend SQLWorkbench) with the JDBC URL jdbc:sapdb://localhost/TESTDB and the credentials SQLUSER,SQLUSER.
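Put together as a small script, starting the container and checking on it could look like this. The wait time and the DBM,DBM operator credentials are assumptions and depend on what the image’s startup script actually configures:

```shell
# Sketch: run the image and verify the database came up. The wait time
# and the DBM,DBM operator credentials are assumptions.
JDBC_URL="jdbc:sapdb://localhost/TESTDB"

if command -v docker >/dev/null 2>&1; then
  docker run --name maxdb -d -p 7200:7200 -p 7210:7210 local/maxdb:latest

  # Give the startup script time to create TESTDB and bring it online,
  # then ask for its state from inside the container.
  sleep 30
  docker exec maxdb dbmcli -d TESTDB -u DBM,DBM db_state
fi

echo "Connect your JDBC tool to: $JDBC_URL"
```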

Customizing the container

This way you get a MaxDB installation running inside a local container which your applications can talk to. If you want to change how the container works, have a look at the startup script and db.ini; these make use of the MaxDB dbmcli utility, and the options you most likely want to change (name of the initial database created, credentials for database users, …) are in there. Generally, that’s not rocket science, so if you have rudimentary MaxDB experience you’ll figure out what to tweak pretty quickly.
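For orientation, the database bootstrapping performed via dbmcli follows roughly the canonical MaxDB creation sequence sketched below. The names TESTDB and SQLUSER mirror the defaults mentioned above; the DBM,DBM credentials, volume names, sizes, and the exact step order are illustrative assumptions – the real values live in db.ini:

```shell
# Rough sketch of a canonical MaxDB database creation via dbmcli.
# Credentials, volume names/sizes and step order are illustrative
# assumptions -- consult the scripts in the repository for the
# authoritative sequence.
DBNAME="TESTDB"
DBMUSER="DBM,DBM"

create_db() {
  dbmcli db_create "$DBNAME" $DBMUSER
  dbmcli -d "$DBNAME" -u $DBMUSER <<EOF
param_startsession
param_init OLTP
param_commitsession
param_addvolume 1 DATA DISKD0001 F 51200
param_addvolume 1 LOG DISKL0001 F 12800
db_admin
db_activate SQLUSER,SQLUSER
load_systab
EOF
}

# Run only where the MaxDB tools are actually installed.
if command -v dbmcli >/dev/null 2>&1; then
  create_db
fi
```

Tweaking the container therefore mostly means changing the values fed into commands like these.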

TODOs and limitations

There still are things to be changed about this, of course. First and foremost, the container should not run as root. This requires a bit more fiddling with users inside the container and preserving file permissions and SUID bits on some of the binaries; I still need to find a good solution for that. Second, this way you end up with a database that’s empty, which for testing environments might not always be what you want. Aside from this, obviously everything possible with Docker can be done with MaxDB too, including mapping external data volumes and stores for keeping persistent databases – which I’ll be evaluating on some of our testing systems next. It’s just a matter of effort – and requirements, as usual. 😉
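As a teaser for the volume mapping just mentioned, a persistent setup could look roughly like this. The in-container data path is an assumption – it has to match wherever the image’s startup script actually places the database volumes:

```shell
# Sketch: keep database volumes on the host so the database survives
# container recreation. The in-container path /var/opt/sdb/data is an
# assumption and must match where the startup script puts the volumes.
DATA_DIR="/srv/maxdb-data"

if command -v docker >/dev/null 2>&1; then
  mkdir -p "$DATA_DIR"
  docker run --name maxdb -d \
    -p 7200:7200 -p 7210:7210 \
    -v "$DATA_DIR:/var/opt/sdb/data" \
    local/maxdb:latest
fi
```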
