ion's SR-71mach6 SpeedServer™ is
an all-SSD server optimized to deliver well over
1 million IOPS on random reads. The SpeedServer is built on the latest
Intel® Xeon® Scalable processors with the
fastest memory and server NICs in a compact, cost-effective, and low-power system.
The ion SR-71mach6 SpeedServer is based on Intel® Data Center SSDs or Optane™ 905P SSDs.
The Intel® Flash SSDs are configured
by ion for greater endurance and better random write performance.
The ion SR-71mach6 SpeedServer is offered only with Intel SSDs, which feature an Uncorrectable Bit Error Rate (UBER) of
1 sector per 10^17 bits read.
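To put that UBER figure in perspective, a quick back-of-the-envelope calculation (assuming the rate is exactly one uncorrectable sector per 10^17 bits read) shows how much data can be read, on average, before one uncorrectable error is expected:

```python
# Back-of-the-envelope: mean data read per uncorrectable error at the
# quoted UBER of 1 sector per 10^17 bits read (illustrative only).
bits_per_error = 10**17
bytes_per_error = bits_per_error / 8      # 1.25e16 bytes
pb_per_error = bytes_per_error / 1e15     # petabytes (10^15 bytes)
print(pb_per_error)  # 12.5 -> roughly 12.5 PB read per expected error
```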
"All-Flash Array, or AFA"?
Well, not exactly. All SSD, yes,
but ion's SR-71mach6 SpeedServer™ can be configured with
Intel® DC Flash SSDs, Intel® Optane™ SSDs based on 3D XPoint™ technology,
or both.
More importantly, ion's SR-71mach6 SpeedServer™ is a standard server.
Software like Open-E JovianDSS, Microsoft® Windows® Storage Spaces Direct,
or VMware® vSAN™ can present the server as a storage resource,
but it can also be a general-purpose server
running the operating system of your choice, with
no secret sauce.
If the requirement is for a server with tiered storage performance, any or all of those technologies can be combined
for an optimum mix of performance and capacity.
No fabric can deliver latency as low as a direct NVMe connection to your storage.
DCIG suggests that
"Any organization that has yet to adopt an all-flash storage infrastructure for all active workloads is operating at a competitive disadvantage."
ion's SR-71 SpeedServer demonstrates its performance across a number of parameters measured by benchmarks.
The Web is full of benchmarks of all sorts. Before looking at benchmark results, one must try to find a benchmark,
or a specific test in a benchmark suite, that has some correlation to the application environment in question.
For storage benchmarks like these, that means an understanding of typical block sizes used in I/O, the proportion of read versus write,
and the proportion of random versus sequential access, among other things. Whether the application is able to queue I/O requests -
submit multiple requests before waiting for results - is also a key factor. A result that talks about how many small, sequential reads
a system can deliver is of no value at all if the application in question will typically be performing
larger I/Os in a random read/write pattern.
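Those workload parameters can all be captured in a benchmark job definition. As an illustration only, here is a hypothetical fio job file approximating an OLTP-style workload: 8 KiB blocks, 70% reads / 30% writes, fully random, 32 outstanding I/Os. The device path is a placeholder, and fio itself is an assumption here, not necessarily the tool used for the results on these tabs.

```ini
; Hypothetical fio job: OLTP-style random mixed workload.
; WARNING: writing to a raw device path is destructive; point
; filename at a scratch file or device you can afford to lose.
[oltp-like]
ioengine=libaio
direct=1
filename=/dev/nvme0n1
rw=randrw
rwmixread=70
bs=8k
iodepth=32
runtime=60
time_based=1
```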
Some definitions may help with understanding of the data on the Performance tabs:
Block Size: The size of each I/O (read or write) in the test, usually in units of kB (really KiB = 1024 bytes).
Sequential or Streaming: I/Os are performed in the order in which data is organized on the storage; for example, large files written or read from beginning to end.
Random: I/Os are not done in order but spread across the entire file or entire volume, randomly.
Outstanding I/Os or Queue Depth or Q: The number of read or write requests issued to storage before waiting for a result.
IOPS: Input/Output Operations per Second, the number of reads and/or writes performed each second using the specified I/O pattern of block size, randomness, and queueing.
MBps: Megabytes per Second, the bandwidth delivered by the specified I/O pattern. Typically, this is really MiBps, in units of 1024 x 1024 = 1,048,576 bytes per second.
Latency: The average delay between issuing a read or write request and receiving the response, typically expressed in milliseconds (ms). On spinning-disk-based systems, latency is determined by mechanical activity: moving the actuator and waiting for the desired block to rotate under the head. In the best case, each of those takes about 2 ms on a fast disk. The results reported here include many tests where the average latency was well below 1 ms.
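The metrics defined above are connected by simple arithmetic: bandwidth is IOPS times block size, and by Little's law the average latency is roughly the queue depth divided by the IOPS. A small Python sketch, using hypothetical figures:

```python
# Relationships among the metrics defined above (hypothetical numbers).

def mbps(iops, block_bytes):
    """Bandwidth in MiB/s implied by an IOPS figure at a given block size."""
    return iops * block_bytes / (1024 * 1024)

def avg_latency_ms(iops, queue_depth):
    """Little's law: mean latency = outstanding I/Os / throughput."""
    return queue_depth / iops * 1000.0

# Example: 1 million 4 KiB random-read IOPS at queue depth 32.
print(mbps(1_000_000, 4096))          # 3906.25 MiB/s
print(avg_latency_ms(1_000_000, 32))  # 0.032 ms average latency
```

Note how a headline IOPS number only translates into bandwidth and latency once the block size and queue depth are known, which is why those parameters are reported alongside every result.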
The tests reported include results of running the benchmark within the SR-71 SpeedServer itself, as well as storage network tests. The tabs on the left are for internal testing; towards the right, the iSCSI, Fibre Channel, and Windows Server tabs cover the ability of the SR-71 SpeedServer to deliver data over a network.
Most of these performance tabs include details about the system under test and link to the full, raw test results for deeper analysis of what was actually tested and how the system behaved. Without this kind of information, there is no way to know whether a reported result has any relation to the problem that needs to be solved. That information is presented to provide context for the numbers; many results reported elsewhere lack that context.