This topic describes a high-level process for determining how much memory (whether heap
or off-heap) to allocate to your data stores.
For all members that will act as data stores, you must determine the memory requirements and
table configuration that will provide the best performance given the hardware resources of your
production deployment. During development, you typically iterate over schema versions and enable
or disable certain product features (for example, event queues, WAN replication, and so on), and
the volume of your data is usually small. However, when you start using a larger data set closer
to your production data set, we recommend performing the following steps:
- Create your schema with the test cluster, along with indexes and any other features you
intend to use in production that will consume memory (for example, asynchronous event queues,
WAN replication, and so on).
- Decide whether you want to use off-heap memory. You will typically benefit from using off-heap
memory when data volume is high and you have at least 150GB of memory on each machine. See Storing Tables in Off-Heap Memory.
- Load a small subset of your production data. Use a data set that is as
representative of production data as possible.
- Create the test cluster with allocated heap and, if required, allocated off-heap memory.
Over-allocate for the sample data set based on the estimates of your production data set size.
See Estimating GemFire XD Heap Overhead and Table Memory Requirements
for overhead estimations.
- Use built-in system procedures or your own custom program to load the rest of your sample data set into the cluster.
- Examine the SYS.MEMORYANALYTICS table. This table provides information on the memory
consumed by tables, indexes, and so on. See Viewing Memory Usage in SYS.MEMORYANALYTICS.
- Test and revise the memory allocation based on the information you view in the
SYS.MEMORYANALYTICS table, as required. Determine the allocations and configuration you will use
for your production deployment.
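For the off-heap decision above, tables are placed off-heap at creation time. The following is a minimal sketch only; the table name, columns, and partitioning clause are illustrative, and the exact OFFHEAP syntax for your release is described in Storing Tables in Off-Heap Memory:

```sql
-- Hypothetical example table; the OFFHEAP clause requests that row data
-- be stored in the configured off-heap memory rather than on the JVM heap.
CREATE TABLE orders (
  order_id    INT NOT NULL PRIMARY KEY,
  customer_id INT,
  amount      DECIMAL(10,2)
) PARTITION BY PRIMARY KEY
  OFFHEAP;
```

Off-heap storage only applies if the data store members were started with off-heap memory allocated; otherwise table creation behaves as configured for heap storage.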
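The examination step above amounts to querying the system table from any client connection. A hedged sketch follows; the column names (SQLENTITY, ID, MEMORY) are assumptions based on the referenced topic and may differ in your release, so run an unrestricted `SELECT *` first to confirm the actual layout:

```sql
-- Inspect estimated memory consumption per SQL entity (table, index, and so on)
-- on each member. Column names are assumed; verify with SELECT * first.
SELECT SQLENTITY, ID, MEMORY
  FROM SYS.MEMORYANALYTICS;
```

Comparing this output after loading the small subset and again after loading the full sample data set shows how memory consumption scales, which feeds directly into the final allocation you choose for production.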