Intel shares features of Knights Landing Xeon Phi processors

Intel today shared details of its Xeon Phi processors (Knights Landing), which will offer up to 16GB of high-bandwidth, on-package memory delivering five times the bandwidth of DDR4, five times better energy efficiency than current GDDR-based memory, and three times its density.

When combined with Intel Omni Scale Fabric, the new memory solution will allow Knights Landing to be installed as an independent compute building block, saving space and energy by reducing the number of components.

Knights Landing will be available both as a standalone processor mounted directly in a motherboard socket and as a PCIe-based card.

Intel said the socketed option removes the programming complexity and bandwidth bottlenecks of transferring data over PCIe, common to GPU and accelerator solutions.

Powered by more than 60 HPC-enhanced Silvermont architecture-based cores, Knights Landing is expected to deliver more than 3 TFLOPS of double-precision performance and three times the single-threaded performance compared with the current generation.

As a standalone server processor, Knights Landing will support DDR4 system memory comparable in capacity and bandwidth to Intel Xeon processor-based platforms, enabling it to run applications with a much larger memory footprint.

Knights Landing will be binary-compatible with Intel Xeon processors.

Knights Landing and Intel Omni Scale Fabric controllers will be available as separate PCIe-based add-on cards.

Applications written for the currently available Intel True Scale Fabric will be compatible with the future Intel Omni Scale Fabric, so customers can transition to the new fabric technology without changing their applications.

For customers purchasing Intel True Scale Fabric today, Intel will offer a program to upgrade to Intel Omni Scale Fabric when it’s available.

Knights Landing processors are scheduled to power HPC systems in the second half of 2015. In April, the National Energy Research Scientific Computing Center (NERSC) announced a Knights Landing-based HPC installation planned for 2016, serving more than 5,000 users and over 700 extreme-scale science projects.