New ARM IP Launched: CMN-600 Interconnect for 128 Cores and DMC-620, an 8-Channel DDR4 IMC

Published at 2016-09-27 18:00:00


You need much more than a good CPU core to conquer the server world. As more cores are added, the way data moves from one part of the silicon to another becomes more important. ARM has today announced a new and faster member of its SoC interconnect IP offerings in the form of the CMN-600 (CMN stands for 'coherent mesh network', as opposed to the cache coherent network of the CCN). This is a direct update to the CCN-500 series, which we've discussed at AnandTech before.

The idea behind a coherent mesh between cores, as it stands in the ARM server SoC space, is that you can put a number of CPU clusters (e.g. four lots of 4xA53) and accelerators (custom or other IP) into one piece of silicon. Each part of the SoC has to talk to everything else, and for that ARM offers a variety of interconnect licenses for users who want to choose from ARM's IP range. For ARM licensees that pick multiple ARM parts, this makes it easier to combine high core counts and accelerators in one large SoC.

The previous-generation interconnect, the CCN-512, could support 12 clusters of four cores and maintain coherency, allowing for large 48-core chips. The new CMN-600 can support up to 128 cores (32 clusters of four). Also part of the announcement is an agile system cache: a way for I/O devices to allocate memory and cache lines directly into the L3, reducing the latency of I/O without having to touch the cores.
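As a quick sanity check on the core counts quoted above, here is a minimal sketch of the cluster arithmetic; the Python is purely illustrative and not part of any ARM tooling.

```python
# Illustrative arithmetic only: maximum coherent core counts implied by the
# cluster figures in ARM's announcement (clusters of four cores each).

CORES_PER_CLUSTER = 4

interconnects = {
    "CCN-512 (previous generation)": 12,  # 12 clusters -> 48 cores
    "CMN-600 (new)": 32,                  # 32 clusters -> 128 cores
}

for name, clusters in interconnects.items():
    print(f"{name}: up to {clusters * CORES_PER_CLUSTER} coherent cores")
```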
Also in the announcement is a new memory controller. The old DMC-520, which was limited to four channels of DDR3, is being superseded by the DMC-620, which supports eight channels of DDR4. Each DMC-620 channel can address up to 1 TB of DDR4, giving a potential SoC capacity of 8 TB. According to ARM's simulations, the improved memory controller offers 50% lower latency and up to 5 times more bandwidth. The new DMC is also being advertised as supporting DDR4-3200. 3200 MT/s offers twice the bandwidth of 1600 MT/s, and doubling the channels doubles the bandwidth again - so we can account for 4 times more bandwidth. It is interesting that ARM claims 5x, which would suggest efficiency improvements as well.

If you double the number of cores and memory controllers, you expect twice as much performance in the almost perfectly scaling SPECint2006_rate. ARM claims that its simulations show 64 A72 cores running 2.5 times faster than 32 A72 cores, courtesy of the improved memory controller. If true, that is fairly impressive. By comparison, we did not see such a jump in performance in the Xeon world when DDR3 was replaced by DDR4. Even more impressive is the claim that the maximum compute performance of a 64x A72 SoC can go up by a factor of six compared to a 16x A57 variant. But we must note that the A57 was not exactly a success in the server world: so far only AMD has cooked up a server SoC with it, and it was both slower and more power hungry than the much older Atom C2000.

We have little doubt that we will find the new CMN-600 and/or DMC-620 in many server solutions. The big question is one of adoption: who will use this interconnect technology in their server SoCs? As most licensees do not disclose this information, it is hard to find out. As far as we know, Cavium uses its own interconnect technology, which would suggest Qualcomm or Avago/Broadcom are the most likely candidates.
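To put the bandwidth comparison above in perspective, here is a back-of-envelope sketch. It assumes a standard 64-bit (8-byte) data path per memory channel and uses DDR3-1600 as the baseline for the old DMC-520, as in the comparison above; actual shipping configurations may differ.

```python
# Back-of-envelope check of ARM's "up to 5x" bandwidth claim, assuming an
# 8-byte (64-bit) data bus per channel. DDR3-1600 is taken as the DMC-520
# baseline per the comparison in the article.

def peak_bandwidth_gbs(mt_per_sec, channels, bus_bytes=8):
    """Theoretical peak bandwidth in GB/s: MT/s * bytes per transfer * channels."""
    return mt_per_sec * bus_bytes * channels / 1000.0

dmc_520 = peak_bandwidth_gbs(1600, channels=4)   # old: 4x DDR3-1600
dmc_620 = peak_bandwidth_gbs(3200, channels=8)   # new: 8x DDR4-3200

print(f"DMC-520 peak: {dmc_520:.1f} GB/s")        # ~51.2 GB/s
print(f"DMC-620 peak: {dmc_620:.1f} GB/s")        # ~204.8 GB/s
print(f"Raw ratio:    {dmc_620 / dmc_520:.1f}x")  # 4.0x from speed + channels alone
# ARM quotes up to 5x, so the remaining gain would have to come from
# controller efficiency (scheduling, bus utilisation), not raw transfer rate.
```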

Source: anandtech.com
