IBM Cognos TM1 Non-Uniform Memory Access
There have been many changes in this industry over the years; one of the biggest has been virtualization technology. There are many considerations when selecting a platform for IBM Cognos TM1, and one of them is how TM1 interacts with NUMA, or Non-Uniform Memory Access. You may have heard this term before, but understanding its effects and how they relate to TM1 can help you make that decision.
Two major types of parallel architecture are prevalent in the industry: Distributed Memory Architecture and Shared Memory Architecture. Shared Memory comes in two forms: Uniform Memory Access (UMA) and Non-Uniform Memory Access (NUMA).
Symmetric Multiprocessor (SMP) or UMA
This is the most widely used design: each processor has equal access to memory and I/O. As more processors are added, however, the shared CPU bus becomes a bottleneck and overall performance diminishes. The figure below shows in more detail how all processors share the same memory resource. Sharing memory creates the problem of cache coherency, meaning every “read” operation must reflect the most recent “write” operation. When the hardware takes care of cache coherency, the design is known as CC-UMA. This type of architecture is usually found in laptops, desktops, and other general-purpose CPUs.
Non-Uniform Memory Access or NUMA
In NUMA systems, the CPUs are split into smaller “nodes,” each with its own processor(s) and local memory. The nodes are then connected into a larger system over an interconnect bus. In short, we boost overall performance by scheduling threads on processors in the same node as the memory being used; a node consumes memory from other nodes only to satisfy requests its local memory cannot. In the figure below you can see there is essentially a “quick path interconnect architecture” that flows from memory to CPU to input/output hub to I/O controller hub.
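The placement policy described above can be sketched in a few lines. This is a simplified model, not TM1 or operating-system code: the node names, capacities, and allocation function are all hypothetical, and a real OS works at page granularity rather than whole gigabytes.

```python
# Minimal sketch of NUMA-aware memory placement: prefer the local node
# (local access is fastest) and spill to remote nodes only when the
# local node is full. All names and sizes here are illustrative.

class NumaNode:
    def __init__(self, name, capacity_gb):
        self.name = name
        self.capacity_gb = capacity_gb
        self.used_gb = 0

    def free_gb(self):
        return self.capacity_gb - self.used_gb

def allocate(preferred, nodes, size_gb):
    """Place size_gb of memory, trying the preferred (local) node first
    and taking the remainder from remote nodes."""
    order = [preferred] + [n for n in nodes if n is not preferred]
    placement = {}
    remaining = size_gb
    for node in order:
        take = min(remaining, node.free_gb())
        if take > 0:
            node.used_gb += take
            placement[node.name] = take
            remaining -= take
        if remaining == 0:
            break
    return placement  # which nodes the allocation landed on

node0 = NumaNode("node0", capacity_gb=128)
node1 = NumaNode("node1", capacity_gb=128)
print(allocate(node0, [node0, node1], 96))  # fits entirely on node0
print(allocate(node0, [node0, node1], 64))  # 32 GB spills over to node1
```

The second allocation is the interesting case: once node0 is exhausted, the remainder lands on node1, and every access to that remote portion pays the interconnect latency.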
The major benefit is that memory is directly connected to the CPUs instead of being reached through a shared memory controller.
With NUMA there is a set of tools and libraries for programmers to rely on. These allow programmers to control parallelism as well as improve overall efficiency, providing functions for page residency, threads, message passing, and data parallelism. When the NUMA model is used, the scalability advantages over UMA become clear.
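One such operating-system facility, processor affinity, can be exercised directly from Python on Linux via `os.sched_setaffinity`. This is a generic OS capability rather than anything TM1-specific, and the sketch below assumes a Linux system; pinning a thread to CPUs in one node is what keeps it close to that node's memory.

```python
# Processor affinity on Linux (a generic OS facility, not a TM1 API):
# restrict the calling process to a single CPU, then restore the
# original CPU set.
import os

pid = 0  # 0 means "the calling process"
original = os.sched_getaffinity(pid)      # CPUs we are allowed to run on
target = {min(original)}                  # pick one CPU from that set
os.sched_setaffinity(pid, target)         # pin the process to it
assert os.sched_getaffinity(pid) == target
os.sched_setaffinity(pid, original)       # restore the original affinity
```

Tools such as numactl on Linux expose the same idea at the command line, binding both the CPUs and the memory of a process to chosen nodes.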
Given the information above, IBM has adopted NUMA as an architecture design choice, with good reason: scalability and low latency are a primary focus. With all of the modern changes in hardware today, there is a demand for changes in some of the approaches developers use, leveraging built-in libraries and tools for development along with operating-system policies (scheduling, processor affinity, and paging). These tools allow IBM Cognos TM1 to take full advantage of memory and in-memory capabilities to give you the best possible outcomes.
Should TM1 be put on a Physical server or a VM server?
The virtualization model is designed to share memory and I/O among guests. This becomes an issue under NUMA when the VM's operating system uses more memory than a single NUMA “node” contains and must capture additional memory from the second “node.” While this will not affect stability or prevent the software from working, it adds overhead and reduces the workload's performance.
As an example, suppose a server has two ten-core processors (twenty cores in total) and 256 GB of memory. Each NUMA “node” then holds 128 GB of local memory, or 12.8 GB per core. If each virtual server is allocated less than a single node's 128 GB, the chances of it running on one node increase; however, this often isn't possible, as most TM1 models consume well over this amount. There are other things to consider on this type of platform, such as workload rebalancing, which also disrupts the NUMA model. To see the full benefit of TM1 performance, consider it best practice to select a physical server based on the NUMA architecture.
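The arithmetic in the example above can be worked through explicitly. The sizing rule in `fits_on_one_node` is an illustrative simplification of the idea, not a vendor formula:

```python
# Worked sizing example from the text: two 10-core sockets, 256 GB total.
total_memory_gb = 256
sockets = 2
cores_per_socket = 10

per_node_gb = total_memory_gb / sockets                        # 128 GB local to each node
per_core_gb = total_memory_gb / (sockets * cores_per_socket)   # 12.8 GB per core

def fits_on_one_node(vm_memory_gb, node_gb=per_node_gb):
    """Simplified rule: a VM sized within one node's local memory
    can stay node-local; a larger VM must span nodes."""
    return vm_memory_gb <= node_gb

print(per_node_gb, per_core_gb)  # 128.0 12.8
print(fits_on_one_node(96))      # True  - can stay on a single node
print(fits_on_one_node(200))     # False - must span both nodes
```

A typical TM1 model's memory footprint lands well above the single-node figure, which is why a large TM1 VM ends up spanning nodes and paying remote-access overhead.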
Please feel free to reach out to us with any questions you might have on this topic.