
The news these days is full of stories about how low the unemployment rate is. As of October 2025, Korea's unemployment rate came in at 2.6%. Looking at that number alone, you might wonder whether the Korean economy is simply doing well, but the story is a bit more complicated.

Is a Low Unemployment Rate Really a Good Sign?

Statistics Korea's data show that the unemployment rate has stayed low for the past several years. It fell to 2.5% in September 2025 and then ticked up to 2.6% in October. At first glance, that looks like a sign of plentiful jobs and a healthy economy.

But there is a catch. The unemployment rate is the share of people actively looking for work who cannot find a job. People who have given up on job searching altogether are not counted. A recent analysis by the Korea Development Institute (KDI) points out that one reason the rate is so low is a decline in the willingness of working-age people to look for work at all.

Young People Tell a Different Story

The overall unemployment rate may be low, but the situation for young people is different. The employment rate for those aged 15–29 is 45.0%, down for the third year in a row. Youth unemployment is around 4.8%, nearly double the overall rate.

For people in their late twenties, the employment rate fell 0.7 percentage points year-on-year to 71.8%. This may say less about an outright lack of jobs than about how hard it is to find a decent one. It means more young people are preparing for employment, drifting between part-time jobs, or putting off entering the labor market entirely.

Employment Is Up, So Why Doesn't It Feel That Way?

The interesting part is that the number of employed people is actually rising. As of September 2025, 312,000 jobs were added, the strongest employment growth in 19 months, bringing total employment to 29.15 million.

So why do the people you meet on the street still say finding a job is hard? There are a few reasons. First, job quality: non-regular, contract, and short-term positions are growing faster than regular full-time jobs. Second, the fields where jobs are being added are not the fields young people want. Growth is concentrated in jobs for older workers and in services, while the professional and large-company positions young people prefer remain a narrow gate.

What Matters More Than the Unemployment Rate

In the end, a single number like the unemployment rate cannot capture the full picture of the labor market. Job quality, the employment reality facing young people, and the growing number of discouraged job seekers all need to be considered together.

This trend looks set to continue into 2026. The headline unemployment rate will probably stay low, but job insecurity among young people and the polarization of the job market remain problems to be solved. We can all feel that good-looking numbers do not mean everything is good.

Statistics do not lie, but they do not tell the whole story either. Behind the 2.6% figure are young people who still cannot find work, people preparing and waiting for a good job, and people who have given up looking altogether.


Memory interleaving is a technique where consecutive blocks of the physical address space are distributed (“striped”) across multiple memory controllers or home nodes, creating a unified memory region that spans them. Typical ARM Network-on-Chip (NoC) interconnects (e.g. Arm’s CMN-600/700 Coherent Mesh) support configurable interleaving granularity in power-of-two sizes. Common options range from cache-line scale (64B or 128B) up to page-scale (4KB or more)[1]. For example, an ARM CMN configuration might stripe addresses at a 256-byte granularity across three or more home nodes[2], meaning each 256B block of addresses goes to a different node in a round-robin fashion. Smaller granularity means more frequent switching between memory controllers, whereas larger granularity means each controller handles larger contiguous address chunks.

Figure: Illustration of interleaving across two memory controllers with a 1KB stripe size. Alternate 1KB address regions (0–1KB, 1–2KB, 2–3KB, etc.) are mapped to different controllers, forming a unified interleaved address space[3]. In the diagram, Memory Controller 0 (white) handles the 0–1KB, 2–3KB, 4–5KB, … segments while Memory Controller 1 (gray) handles the 1–2KB, 3–4KB, 5–6KB, … segments. This automatic distribution balances traffic across controllers without software having to manage placement.
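To make the figure concrete, here is a minimal sketch of the address decode it implies, assuming a plain modulo scheme with a 1KB stripe and two controllers (real interconnects such as CMN typically hash additional address bits, so treat the constants and names as illustrative):

```c
#include <stdint.h>
#include <stdio.h>

/* Illustrative model only: 1KB stripes interleaved across 2 controllers,
 * as in the figure. Real NoCs may hash more address bits. */
#define STRIPE_SIZE 1024u   /* interleave granularity in bytes */
#define NUM_TARGETS 2u      /* number of memory controllers    */

static unsigned target_of(uint64_t paddr)
{
    /* Which stripe the address falls in, then round-robin over targets. */
    return (unsigned)((paddr / STRIPE_SIZE) % NUM_TARGETS);
}

int main(void)
{
    uint64_t addrs[] = { 0x0000, 0x0400, 0x0800, 0x0C00, 0x1000 };
    for (unsigned i = 0; i < sizeof addrs / sizeof addrs[0]; i++)
        printf("addr 0x%04llx -> MC%u\n",
               (unsigned long long)addrs[i], target_of(addrs[i]));
    return 0;
}
```

Running it prints MC0, MC1, MC0, MC1, MC0 for the 0KB, 1KB, 2KB, 3KB, and 4KB offsets, matching the alternating pattern in the figure.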

The interleaving granularity is typically chosen based on system goals. Fine-grained interleaving (e.g. 256B or 1KB stripes) maximizes parallelism by spreading even small memory accesses across controllers, while coarse interleaving (e.g. 4KB stripes) keeps whole blocks (like OS pages) on a single controller. ARM’s NoC hardware allows these modes to be configured to suit the workload; for instance, 3-SN or 6-SN striping modes in CMN hash addresses across 3 or 6 home nodes at 256B granularity in order to distribute load evenly[2].

AXI Burst Transactions and Interleave Boundaries

AXI (Advanced eXtensible Interface) is a burst-based protocol, and AXI masters can issue bursts consisting of multiple data beats. However, the AXI specification imposes a key rule: bursts should not cross 4KB address boundaries[4]. The reason is that crossing a 4KB boundary could mean the burst spans into a different slave region (e.g. a different memory controller or peripheral), which is generally “an impractical situation” and is disallowed by the spec[4]. In practice this means an AXI burst must fit within a 4KB-aligned address window.
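A hedged sketch of that rule: given a burst's start address and total length in bytes, the helper below flags any transfer that would cross a 4KB-aligned window. The byte-level view and the function name are illustrative; on the actual interface the burst is described by AxADDR, AxLEN, and AxSIZE.

```c
#include <stdbool.h>
#include <stdint.h>

/* Returns true if a burst of `len` bytes (len >= 1) starting at `addr`
 * would cross a 4KB-aligned boundary, which AXI forbids. */
static bool crosses_4kb(uint64_t addr, uint32_t len)
{
    uint64_t last = addr + len - 1;          /* address of the last byte   */
    return (addr >> 12) != (last >> 12);     /* different 4KB page frames? */
}
```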

When memory interleaving is used with a granularity smaller than 4KB, a single contiguous AXI burst could still target multiple controllers internally, even though it stays within a 4KB region (and thus isn’t illegal by AXI rules). For example, with a 1KB interleaving across two controllers, a 2KB linear burst starting at an aligned address will span two 1KB stripes: half of its addresses belong to “Controller A” and half to “Controller B”. The AXI protocol itself has no knowledge of this split, since the interleaved controllers present one unified address space to the master. It falls to the NoC/interconnect logic to handle the split transparently.

Burst Splitting at the AXI Level: NoC interconnects are designed to chop or split bursts that cross an interleaving boundary so that each portion can be routed to the appropriate memory controller. In our example of a 2KB burst with 1KB stripes, the interconnect (e.g. at the NoC’s master interface unit) will split the single burst into two transactions – one for the first 1KB to Controller A, and one for the second 1KB to Controller B. More generally, if a burst transaction crosses an interleave boundary, the interconnect hardware “chops” the transaction at that boundary[3]. This ensures each sub-burst stays entirely within one memory target. The ARM CoreLink NoC architectures (and similarly, the NoC in Xilinx/AMD Versal) implement this behavior at the NoC entry point. “If a burst transaction is sent to an NMU (NoC Master Unit) and crosses an interleave boundary…the transaction is chopped at the interleave boundary,” so that a single AXI transaction never spans two interleaved regions[5]. The master device still perceives it as one continuous burst overall, but under the hood it has been divided into multiple AXI transfers on the memory side. The AXI write or read responses for the sub-transactions are coordinated such that the original ordering is preserved and the master’s expectations are met (e.g. the data beats return in sequence).
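The chopping behaviour can be modelled in a few lines. The following sketch is an illustrative software model, not the NoC's actual logic; the stripe size, controller count, and structure names are assumptions chosen to match the 2KB/1KB example above.

```c
#include <stdint.h>
#include <stdio.h>

#define STRIPE_SIZE 1024u   /* assumed interleave granularity */
#define NUM_MCS     2u      /* assumed number of controllers  */

/* One sub-burst produced by chopping at an interleave boundary. */
struct sub_burst { unsigned mc; uint64_t addr; uint32_t len; };

/* Chop [addr, addr+len) at every STRIPE_SIZE boundary.
 * Returns the number of sub-bursts written to `out` (capacity `max`). */
static unsigned chop_burst(uint64_t addr, uint32_t len,
                           struct sub_burst *out, unsigned max)
{
    unsigned n = 0;
    while (len > 0 && n < max) {
        uint64_t stripe_end = (addr / STRIPE_SIZE + 1) * STRIPE_SIZE;
        uint32_t chunk = (uint32_t)(stripe_end - addr);
        if (chunk > len) chunk = len;
        out[n++] = (struct sub_burst){
            .mc   = (unsigned)((addr / STRIPE_SIZE) % NUM_MCS),
            .addr = addr,
            .len  = chunk };
        addr += chunk;
        len  -= chunk;
    }
    return n;
}

int main(void)
{
    struct sub_burst sb[8];
    unsigned n = chop_burst(0x0000, 2048, sb, 8);  /* the 2KB example */
    for (unsigned i = 0; i < n; i++)
        printf("MC%u: addr=0x%04llx len=%u\n",
               sb[i].mc, (unsigned long long)sb[i].addr, (unsigned)sb[i].len);
    return 0;
}
```

For a 2KB burst starting at address 0 it emits two 1KB sub-bursts, one for Controller A (MC0) and one for Controller B (MC1).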

For very fine interleaving (256B, 512B etc.), even moderate-size bursts will be split into many pieces. Consider a 256-byte interleaving: a burst of 1KB (1024 bytes) would be divided into 4 chunks mapped to alternating controllers. The interconnect would issue 4 sub-bursts (each 256B) to the controllers in turn. Conversely, with a coarse 4KB interleaving, that same 1KB burst stays entirely on one controller (no split needed). In fact, with 4KB stripes, any legal AXI burst (which cannot exceed 4KB by rule) will always remain on a single controller. Thus, 4KB interleaving effectively avoids burst splitting, aligning with the AXI boundary rule by design.

NoC Packetization and Data Splitting

On-chip networks (such as ARM’s CMN) transport transactions using packets and flits internally. A high-level AXI or CHI transaction may be broken into smaller packets for routing efficiency or protocol reasons. Interleaving granularity influences how the NoC packetizes and routes the data:

  • Single-Controller Case: If an AXI burst is contained within one interleaved chunk (e.g. a 512-byte burst with 1KB interleave, or any burst under 4KB with 4KB interleave), the NoC can treat it as one transaction targeted to a single home node. The request travels to that node, and the data payload may be sent in one or multiple packets (depending on size). For example, if the NoC’s data packet payload is 64 bytes (commonly the size of a cache line), a 512B read might be delivered as 8 data packets of 64B each, all returning from the same target.
  • Cross-Controller Case: If a burst spans two or more interleaved regions, the NoC generates multiple request packets – one per target region. Each packet carries the address range and length pertaining to its region. These packets can be sent in parallel into the mesh network, each heading to a different memory controller node. The data responses will likewise come back as separate packet streams from each controller, which the interconnect will interleave or concatenate back to fulfill the original AXI burst stream. Notably, the packet-level data splitting corresponds to the interleaving: finer granularity causes the NoC to split the data at finer boundaries, potentially creating more, smaller packets. In the earlier 2KB burst example (1KB stripes), two parallel read request packets would be issued. Each yields ~1KB of data, which might come back as a sequence of packets (e.g. 16×64B packets from Controller A and 16×64B from Controller B, in an interwoven fashion).

Internally, ARM’s coherent interconnect protocol (CHI) often operates on cache-line units, so large bursts are naturally segmented. In fact, the NoC may deliberately fragment bursts into cache-line-sized chunks for transport. For instance, the CMN-700 documentation notes that a remote read burst may be “cracked… into 64B chunks” when forwarded to a home node[6]. This means even if an AXI master issues a long burst, the NoC will handle it as a series of 64-byte packets on the wire. Smaller interleaving granularities (256B, 512B) align well with such chunking – multiple 64B packets will simply be directed round-robin to different controllers. With larger granularity, the entire burst’s packets all go to the same controller (until a 4KB boundary is reached).
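To put numbers on the packet counts, the sketch below tallies how many 64B data packets each controller would source for an aligned 1KB burst under 256B interleaving. The 64B payload mirrors the cache-line chunking noted above, but the payload size, stripe size, and controller count are assumptions rather than CMN configuration values.

```c
#include <stdint.h>
#include <stdio.h>

#define STRIPE_SIZE 256u    /* assumed fine-grained interleave */
#define NUM_MCS     2u
#define PKT_PAYLOAD 64u     /* cache-line-sized data packets   */

int main(void)
{
    uint64_t addr = 0;          /* aligned 1KB burst, as in the text */
    uint32_t len  = 1024;
    unsigned pkts[NUM_MCS] = {0};

    /* Walk the burst one 64B payload at a time and tally which
     * controller each data packet would return from. */
    for (uint64_t a = addr; a < addr + len; a += PKT_PAYLOAD)
        pkts[(a / STRIPE_SIZE) % NUM_MCS]++;

    for (unsigned m = 0; m < NUM_MCS; m++)
        printf("MC%u: %u data packets of %uB\n", m, pkts[m], PKT_PAYLOAD);
    return 0;
}
```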

It’s important to note that packetization overhead increases with the number of splits. Each sub-transaction carries its own header and routing info. So, a finely interleaved burst that becomes many small packets incurs more header overhead and potentially more coordination logic (to merge responses) than one large packet stream. However, the NoC is optimized for this scenario with dedicated network interface units (NIUs) or RN-F/RN-D components that handle the splitting and reassembly seamlessly.

Impact of Granularity on Performance and Bandwidth

The choice of interleaving size involves trade-offs between parallelism and overhead. Fine-grain interleaving (e.g. 256B or 1KB): This maximizes the number of memory controllers that can be engaged by a single high-bandwidth request stream. It allows more requests to reach different channels in parallel, thereby increasing the achievable memory throughput[7]. In multi-channel memory systems, the interleaving granularity largely determines how many simultaneous accesses can occur – a finer stripe means an access pattern will hop to the next channel more frequently, keeping all channels busy for a sustained sequential access[7]. In other words, fine granularity improves memory-level parallelism and tends to yield higher bandwidth utilization. Studies have shown that very fine interleaving can significantly outperform coarse interleaving in bandwidth-heavy workloads. For example, one research work demonstrated that using a 128B stripe (as opposed to a 4KB stripe) can nearly double effective memory bandwidth in worst-case scenarios[8]. The smaller stripes ensure that even within one OS memory page, data is spread across multiple controllers, preventing any single controller from becoming a bottleneck[9].

However, fine interleaving isn’t free of drawbacks. The increased number of sub-transactions and network packets adds some overhead (extra packet headers, more ACK/NACK handling, etc.), which can slightly increase latency for a given burst. The interconnect must also merge or coordinate multiple responses – this is well within design capabilities, but it adds complexity. Additionally, because fine interleaving distributes even small blocks across all controllers, it means all controllers (and their attached DRAM banks) are active for most memory operations. This can reduce locality (e.g. consecutive cache lines might reside in different DRAM channels, potentially opening multiple DRAM rows) and may increase power usage since all memory channels are engaged. There is also an architectural consideration: extremely fine granularity (e.g. 128B) is smaller than the typical 4KB memory page, which means the operating system cannot direct pages to specific channels – every page is automatically spread across channels[9]. This yields great load balancing, but it removes any software control over channel usage (for NUMA or QoS purposes) and requires that the number of channels be a power of two for the address bit striping to evenly cover all combinations[10].

Coarse-grain interleaving (e.g. 4KB): This effectively assigns entire pages (or large blocks) to a single controller. The benefit is simplicity and locality – an OS page resides wholly in one memory controller, which can be advantageous for page-based allocation or if certain processors are affinity-biased to certain controllers. It minimizes the splitting of AXI bursts: as noted, a 4KB stripe avoids any burst-level splits under the AXI rules. This can slightly reduce overhead and keep transactions atomic on the network. The downside is a potential loss of parallelism. A single streaming access will saturate only one controller until it moves to the next 4KB page. If a workload frequently accesses large contiguous regions, one controller might handle most of the traffic while others sit idle, until a 4KB boundary is crossed. In high-bandwidth scenarios, this can underutilize available memory bandwidth – performance can degrade when coarse interleaving prevents parallel channel usage, especially if the memory controllers individually become bottlenecked[7]. Empirical analyses have shown that coarse interleaving (page-sized or larger) can suffer as core counts and memory demands increase, whereas fine interleaving keeps more channels busy and delivers higher sustained throughput[7].

 Medium granularity (e.g. 1KB or 2KB): These offer a compromise. For instance, 1KB stripes chop only those bursts that cross a 1KB boundary, while consecutive 1KB blocks still alternate between controllers, so a streaming access engages the next controller after every kilobyte (with 2 controllers). Many common cache-coherent transactions (like 64B or 128B cache line fills) won’t notice a difference between 1KB and 4KB interleaving – they’ll just hit one controller. But larger DMA bursts or consecutive cache lines will spread across controllers after a few hundred bytes, improving concurrency. In practice, SoC designers often choose an interleave size that matches typical burst lengths or memory access patterns to balance efficiency. For example, if most bursts are 64B–256B, a 256B stripe might be unnecessarily fine (causing splits for bursts just over 256 bytes); a 1KB stripe would ensure most such bursts stay unsplit while still load-balancing at a page sub-boundary. On the other hand, if the system frequently issues 1KB+ cache refills or larger DMA transfers, using 256B or 512B stripes can ensure those are split and serviced concurrently by multiple controllers for better bandwidth.
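One way to apply this heuristic is to replay a set of representative burst sizes against each candidate stripe size and count how many bursts would be chopped. A minimal sketch follows; the burst mix and candidate granularities are hypothetical, and stripe-aligned starts are assumed so a burst splits only when it is longer than a stripe.

```c
#include <stdint.h>
#include <stdio.h>

/* Count how many stripe-aligned bursts of the given sizes would cross
 * at least one interleave boundary for each candidate stripe size. */
int main(void)
{
    const uint32_t bursts[]  = { 64, 128, 256, 512, 1024, 2048 }; /* hypothetical mix */
    const uint32_t stripes[] = { 256, 1024, 4096 };

    for (unsigned s = 0; s < sizeof stripes / sizeof stripes[0]; s++) {
        unsigned split = 0;
        for (unsigned b = 0; b < sizeof bursts / sizeof bursts[0]; b++)
            if (bursts[b] > stripes[s])   /* aligned bursts split only if longer than a stripe */
                split++;
        printf("stripe %4uB: %u of %zu bursts split\n",
               stripes[s], split, sizeof bursts / sizeof bursts[0]);
    }
    return 0;
}
```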

Conclusion and Key Takeaways

In ARM-based NoC systems, interleaving granularity has a direct impact on how data is segmented and routed through the interconnect. Fine granularity (256B–1KB) causes the NoC to split bursts into multiple packetized transfers that engage several memory controllers at once, boosting parallel throughput at the cost of a bit more protocol overhead. Coarse granularity (2KB–4KB) keeps bursts intact on a single controller (up to the 4KB AXI limit), simplifying transactions but potentially leaving performance on the table when one channel becomes a bottleneck. The AXI protocol’s 4KB burst boundary rule underpins these behaviors: interleaving of 4KB or larger aligns with the rule to avoid splits, whereas sub-4KB interleaving relies on the interconnect to transparently chop bursts at boundaries[5][4].

Overall, the trade-off is between maximum memory parallelism and transaction overhead/complexity. Industry practice and documentation (Arm’s CMN technical references, Xilinx Versal NoC guides, etc.) highlight that interleaving across controllers can “2x or 4x the bandwidth” available to a single request stream[3], which is a huge benefit for memory-intensive workloads. Academic studies further reinforce that finer interleaving yields higher effective bandwidth utilization in multi-channel memory systems[8]. Designers must balance this against considerations like power, typical access size, and system software needs. In summary, smaller interleaving sizes generally improve throughput by enabling packet-level data splitting across the NoC, while larger sizes favor simplicity and locality by keeping AXI bursts intact. The optimal choice depends on the SoC’s performance targets and workload characteristics, but the mechanism is fundamentally the same: interleaving granularity dictates how the NoC divides and conquers memory transactions across the chip.

Sources: The analysis above is based on ARM CMN-600/700 technical documentation, which details supported interleaving modes and internal hashing/striping mechanisms[1][2], as well as an AMD/Xilinx NoC user guide illustrating burst chopping at interleave boundaries[3]. The AXI specification’s 4KB rule is noted in ARM’s developer materials[4]. An academic study on multi-channel memory systems was referenced to quantify performance impacts of different interleaving granularities[7][8]. These sources collectively underpin the discussion of packet and burst splitting behaviors in modern ARM-based NoC designs.


[1] [2] [6] Arm Neoverse CMN-700 Technical Reference Manual Addendum (108055_0301_01_en)

https://www.scribd.com/document/845143330/arm-neoverse-cmn-700-trm-addendum-108055-0301-01-en

[3] [5] Memory Interleaving - 1.1 English - PG313

https://docs.amd.com/r/en-US/pg313-network-on-chip/Memory-Interleaving

[4] What is 4KB address boundary in AXI protocol? - SystemVerilog - Verification Academy

https://verificationacademy.com/forums/t/what-is-4kb-address-boundary-in-axi-protocol/33510

[7] [8] [9] [10] upcommons.upc.edu

https://upcommons.upc.edu/bitstream/handle/2117/11379/05642060.pdf


Introduction

System-on-Chip (SoC) architectures for many-core processors rely on Network-on-Chip (NoC) interconnects to connect cores, caches, and memory controllers. Two common NoC topologies are the 2D mesh and the 2D torus. In parallel, SoCs employ memory interleaving techniques to boost memory performance by spreading memory accesses across multiple memory banks or controllers. This report explores how NoC topology (especially mesh vs. torus) relates to memory interleaving strategies, explaining how interleaving enhances memory throughput and how it is implemented on different NoC topologies. We also survey the evolution of these concepts in academic research and describe current industry practices (from ARM, Intel, AMD, and NoC IP vendors) with technical examples and references.

NoC Topologies: Mesh vs. Torus

A NoC provides an on-chip communication fabric connecting IP blocks (CPU cores, caches, memory controllers, etc.) via routers. In a 2D mesh, nodes are arranged in a grid with each node connected to its immediate neighbors in the north, south, east, and west directions. Mesh networks are planar and have no wrap-around connections at the edges. In contrast, a 2D torus extends a mesh by linking the opposite edges, so each node has neighbors in all four directions with the network edges “wrapped around” [sciencedirect.com]. This wrap-around in a torus reduces the maximum and average path length between nodes compared to a mesh (since there are no edge boundaries), improving overall communication bandwidth and reducing latency for distant node communication [sciencedirect.com]. Both topologies have been widely studied in on-chip networks and parallel computers due to their regular structure and scalability. In practice, mesh topologies have been more common in commercial many-core chips (e.g. Intel’s mesh in Skylake-SP Xeons [tomshardware.com], ARM’s CMN mesh in Neoverse cores [anandtech.com]), whereas torus topologies are more often seen in larger-scale multiprocessor networks or academic prototypes because the wrap-around links can increase routing complexity and wiring overhead on chip. Nonetheless, torus NoCs remain of interest for their potential performance benefits, and NoC IP generators (like Arteris FlexNoC) even support automatic topology generation for meshes, rings, and tori [arteris.com].
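The path-length argument is easy to quantify. The short sketch below compares hop counts between two nodes in a k-ary 2-dimensional mesh and torus; the radix and the corner-to-corner coordinates are assumptions chosen for illustration.

```c
#include <stdio.h>
#include <stdlib.h>

#define K 8   /* radix: an 8x8 grid of nodes */

/* Hops along one dimension of a mesh: no wrap-around. */
static int mesh_dist(int a, int b)  { return abs(a - b); }

/* Hops along one dimension of a torus: may wrap around the edge. */
static int torus_dist(int a, int b)
{
    int d = abs(a - b);
    return d < K - d ? d : K - d;
}

int main(void)
{
    /* Corner-to-corner traffic, e.g. a core at (0,0) reaching a
     * memory controller at (7,7). */
    int sx = 0, sy = 0, dx = 7, dy = 7;
    printf("mesh : %d hops\n", mesh_dist(sx, dx)  + mesh_dist(sy, dy));
    printf("torus: %d hops\n", torus_dist(sx, dx) + torus_dist(sy, dy));
    return 0;
}
```

For an 8×8 grid it reports 14 hops corner-to-corner in the mesh versus 2 in the torus, which is the effect the wrap-around links provide.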

Mesh vs. Torus and Memory Access: In a mesh NoC, the physical placement of memory controllers (MCs) on the grid and the distance from cores can lead to non-uniform memory access latencies – a core located in one corner of the mesh will incur more router hops to reach a controller on the opposite corner than to a nearer controller. A torus can alleviate some of these distance issues by providing multiple wrap-around paths, effectively reducing worst-case hop counts. The improved path diversity and shorter diameters of a torus can help avoid congestion hot spots when memory traffic is heavy, by routing around the ring connections. However, both topologies require careful traffic management and memory address mapping to fully utilize their bandwidth. This is where memory interleaving comes into play: by distributing memory accesses across multiple controllers or banks, one can prevent any single region of the NoC from becoming a bottleneck due to concentrated traffic.

Memory Interleaving for Enhanced Memory Performance

Memory interleaving is a classic technique to improve memory bandwidth and latency by splitting memory into multiple modules that can be accessed in parallel. Instead of storing consecutive addresses entirely in one memory bank (or channel), interleaving stripes memory addresses across multiple banks or memory controllers in a fixed pattern. This means that sequential or nearby addresses reside in different physical memory units, allowing multiple memory accesses to proceed concurrently. As a result, a processor can issue back-to-back memory requests without waiting for the previous one to finish, since each request goes to a different memory unit that can operate independently [infohub.delltechnologies.com]. Interleaving was originally used in large-scale and vector computers to overcome slow memory speeds by overlapping accesses to multiple memory banks [acs.pub.ro]. In modern SoCs, interleaving is crucial for multichannel DRAM systems: for example, dual-channel memory doubles the data path (128 bits instead of 64 bits) and allows two memory transactions to occur in parallel, effectively feeding the processor with up to 2× the data per cycle [manuais.iessanclemente.net]. More generally, with N channels or controllers interleaved, the peak memory bandwidth can approach N times that of a single channel (assuming ideal load balancing).

From a performance standpoint, interleaving maximizes utilization of all memory resources. It reduces idle time for memory devices and increases parallelism, which is especially beneficial for throughput-oriented workloads. Dell’s server documentation concisely explains that when memory is interleaved, “contiguous memory accesses go to different memory banks” and therefore subsequent accesses need not wait for the previous one to complete [infohub.delltechnologies.com]. With all DIMMs or channels in one interleave set (uniform memory region spanning them), the total memory bandwidth is increased since “the distribution of information is divided across several channels… and the total memory bandwidth is increased” [infohub.delltechnologies.com]. In practice, most systems see maximum memory performance when all memory controllers/channels are interleaved into one unified address space, so that any given memory region is spread across all channels [infohub.delltechnologies.com]. This ensures that every memory access load is balanced and utilizes the full width of the memory system. Conversely, if interleaving is not used (or multiple disjoint interleave sets are created), some memory regions would reside on only a subset of controllers, potentially leaving bandwidth on other controllers unused and creating NUMA (Non-Uniform Memory Access) effects where some addresses are “faster” to access than others [infohub.delltechnologies.com]. Thus, from a pure bandwidth perspective, a single interleaved pool is ideal for most general-purpose workloads [infohub.delltechnologies.com]. (There are scenarios where partial or no interleaving is preferred, such as explicit partitioning of memory for real-time isolation or NUMA-aware software optimizations – these will be touched upon later.)

Interleaving Granularity: A key design parameter is the interleaving granularity, i.e. the size of address blocks alternated between memory units. This can range from very fine-grained (e.g. every cache line or 64 bytes alternating controllers) to coarse (e.g. 4KB pages or larger). Fine-grained (cache-line or sub-page) interleaving tends to maximize load balancing and parallelism, since even small memory regions engage all channels. However, very small stripes can incur overheads: for instance, successive cache lines going to different controllers might increase the number of open/close operations in each DRAM (reducing row buffer locality) [users.cs.utah.edu] and also require more frequent controller switching for a streaming access pattern. Coarser interleaving (e.g. at page level) keeps each page in one controller (better locality) but sacrifices parallelism for a single large data stream. Many systems choose an intermediate granularity, such as 1KB or 4KB chunks, to balance these trade-offs. For example, AMD’s programmable NoC (from its Xilinx division) allows interleaving across 2 or 4 controllers with configurable granularity (e.g. 1KB stripes) [docs.amd.com]. In that scheme, “alternate 1KB regions go to different DDR controllers,” and the NoC hardware will even split a burst transfer if it crosses a 1KB boundary so that each portion is sent to the appropriate controller [docs.amd.com]. This ensures that no single AXI transaction spans two controllers, simplifying coherence and ordering. Generally, the interleave granularity is aligned to typical access sizes (cache lines or pages) to avoid splitting too many requests.

Mapping Interleaved Memory onto NoC Topologies

When implementing memory interleaving in an SoC, the NoC plays a central role in routing memory requests to the correct memory controller based on the address. In a multi-controller system with a flat unified address space, each physical address is mapped to a specific memory controller, often by a simple modulo or hashing on certain address bits. For instance, in a system with two controllers interleaved, a particular address bit (or bits) might determine which controller holds that address. The on-chip interconnect must decode those address bits and forward the request to the corresponding controller’s port. This functionality can be integrated into the NoC routers or the memory request initiators. In an ARM mesh interconnect, for example, a component called the Home Node or Snoop Filter (HN-F) node owns a portion of the physical address space; a hashing scheme may be used to distribute addresses evenly across HN-F nodes (which correspond to cache slices or memory ports) [developer.arm.com]. In the AMD/Xilinx NoC mentioned earlier, each NoC Master Unit (NMU) at a cache/CPU will perform address interleaving as configured: “the NoC manages interleaving at each NoC entry point (NMU)… arranged in a strided fashion such that alternate 1K regions go to different DDR controllers” [docs.amd.com]. The result is that half of the memory region’s addresses map to one controller and half to another, effectively making two physical controllers behave like one larger, higher-bandwidth memory from the software’s perspective [docs.amd.com].
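As a generic illustration of such an address decode, the sketch below XOR-folds a few stripe-index bits to pick one of two controllers, which spreads power-of-two strides that a plain modulo scheme would send to the same target. The bit positions, stripe size, and controller count are assumptions, not ARM's or AMD's actual hash functions.

```c
#include <stdint.h>
#include <stdio.h>

#define STRIPE_SHIFT 10u   /* 1KB stripes (assumption) */

/* Pick one of two controllers by XOR-folding a few stripe-index bits.
 * XOR hashing spreads power-of-two strides that a plain modulo scheme
 * would keep mapping to the same controller. */
static unsigned select_mc(uint64_t paddr)
{
    uint64_t stripe = paddr >> STRIPE_SHIFT;
    return (unsigned)((stripe ^ (stripe >> 1) ^ (stripe >> 4)) & 0x1);
}

int main(void)
{
    /* A 2KB stride: plain modulo-2 on the stripe index would always
     * return the same controller, while the hash alternates targets. */
    for (uint64_t a = 0; a < 8 * 2048; a += 2048)
        printf("addr 0x%05llx -> MC%u\n", (unsigned long long)a, select_mc(a));
    return 0;
}
```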

Load Balancing and NoC Traffic: Interleaving naturally balances memory traffic load across multiple controllers. In a 2D mesh NoC, this means that requests from all cores are distributed across the chip rather than funneling into a single memory controller node. For example, a 64-core mesh might have 4 memory controllers placed at four quadrants of the chip; with address interleaving (or hashing) the memory requests of each core will statistically spread to all four controllers, preventing any one quadrant’s controller (and the routes leading to it) from becoming a hot spot. A concrete case is the Tilera Tile64 manycore (which used a mesh NoC): it had 4 on-chip memory controllers and employed a controller-interleaved page placement so that no 64KB page was serviced by only one controller [arcb.csc.ncsu.edu]. In fact, on Tile64 the hardware used bits of the physical address to select the memory controller, with the effect that one could not allocate more than 64KB of contiguous physical memory on a single controller – larger allocations automatically spanned multiple controllers [arcb.csc.ncsu.edu]. This striping scheme ensured that memory traffic was evenly divided among the 4 controllers (each with its own DRAM channels), significantly boosting achievable memory bandwidth. The design also implemented an address hashing technique to spread accesses among DRAM banks and reduce bank conflicts [arcb.csc.ncsu.edu], illustrating that interleaving can be applied hierarchically (among controllers, and among banks within each controller). The overall impact was a much higher sustainable memory throughput and smoother memory access latency distribution, since each core sees an average memory latency that is an aggregate of near and far controllers.

When comparing mesh and torus topologies, interleaving works conceptually the same way – by address partitioning – but the topological differences influence the performance of the interconnect under that traffic. In a mesh, if memory controllers are at the periphery, interleaved traffic means every core will at times need to send requests to distant edge controllers, traversing multiple hops. This can introduce noticeable latency for those accesses and consume NoC bandwidth. A fully interleaved mesh thus behaves somewhat like a distributed shared memory with uniform distribution but non-uniform physical distances (i.e. an on-chip NUMA, where the “NUMA-ness” is hidden from software by the flat address space). By contrast, a torus can mitigate some extremes: because of wrap-around links, the effective distance between any core and any controller is shorter on average, and there are typically multiple minimal paths to a given controller. This can reduce worst-case latency and avoid saturating any single path. In other words, a torus can more gracefully handle the all-to-all traffic pattern that full interleaving induces. Academic analyses have noted that a torus offers higher bisection bandwidth and lower average distance than a mesh [sciencedirect.com], which directly benefits memory traffic traveling across the chip. Thus, if one were to map the same memory interleaving scheme onto a torus NoC, it would generally yield lower memory access latencies under load, thanks to the topology’s richer connectivity. The trade-off is increased hardware complexity and potentially more power used in those extra links.

Local vs Interleaved Mapping: An alternative to pure interleaving is to assign each core or region primarily to a “local” memory controller (like a NUMA partition) to minimize hop count, and only use remote controllers when local memory is full. This NUMA approach was explored in research and is even configurable in some systems (e.g., “cluster-on-die” mode in Intel processors or BIOS options to turn off interleaving across sockets). However, it places the burden on software/OS to handle non-uniform regions. Awasthi et al. (PACT 2010) point out that simply allocating all data to the nearest MC might not be optimal due to load imbalance – some controllers could become overwhelmed while others are idle [users.cs.utah.edu]. They proposed adaptive runtime mechanisms to migrate or replicate pages between controllers, achieving significant performance gains (17–35%) over static first-touch or static interleaving policies [users.cs.utah.edu]. This highlights that the optimal strategy can depend on workload and contention: interleaving uniformly optimizes throughput, whereas localized allocation optimizes latency for certain access patterns. Many modern NoC-based SoCs therefore support flexible interleaving modes. For instance, the AMD Infinity Fabric (used in Epyc server CPUs) can be configured in BIOS either as a single memory domain interleaving across all controllers or as multiple NUMA domains where each die’s controller mostly serves its local cores [chipsandcheese.com]. In AMD’s older Magny-Cours architecture (two dies in one package), the system could be run in an interleaved memory mode so that legacy OSes saw a unified node, at the cost of some cross-die latency [chipsandcheese.com]. Ultimately, balancing NoC distance vs. memory parallelism is a key design decision, and both academia and industry have developed solutions (like sophisticated page mapping algorithms, or hashed interleaving that takes physical distance into account) to get the best of both worlds.

Evolution in Academic Literature

Techniques for memory interleaving have a long history in computer architecture. Academic literature from as early as the 1970s and 1980s discussed interleaved memory to supply multiple words per cycle to high-performance processors (e.g., in vector supercomputers) [acs.pub.ro]. As multiprocessors emerged, researchers noted the benefits of interleaving to allow concurrent accesses from multiple CPUs and to reduce memory bank contention. Lamport (1979) famously described the requirements for a multiprocessor’s memory system to provide a coherent view despite operations completing out of program order [researchgate.net] – an issue that becomes trickier when buffering and interleaving are used to overlap memory accesses [researchgate.net]. By the 1990s, cache coherence protocols and non-uniform memory access (NUMA) architectures were active research areas; interleaving was a basic assumption in many cache-coherent NUMA designs to stripe addresses across memory modules in different nodes for load balancing [infohub.delltechnologies.com].

With the rise of on-chip multiprocessors (CMPs) in the 2000s, academic focus shifted to on-chip networks and distributed cache/memory organizations. One notable thread was the introduction of distributed shared last-level caches (banks of L2/L3 across the chip) and the concept of Non-Uniform Cache Access (NUCA). Huh et al. (2007) studied a 16-core CMP with 256 L2 banks connected via a network, comparing different address-to-bank mapping policies [researchgate.net]. A simple static interleaving of addresses to cache banks provided uniform load spreading, though at the cost of some remote bank accesses; they found that more dynamic policies could outperform static interleaving by keeping frequently accessed lines in closer banks [researchgate.net]. This mirrors the tension between uniform interleaving and locality that also applies to distributing main memory accesses.

As on-chip networks evolved, researchers examined various NoC topologies (mesh, torus, flattened butterfly, etc.) and their impact on memory access. Dally and Towles (2001) advocated packet-switched on-chip networks and discussed how regular topologies like meshes can be used to connect processors and memories in a tiled fashion for scalability. The mesh/torus comparison has been revisited often: a recent work on NoC topology design notes that “the Torus is like [a] mesh but with wrap-around connections, reducing average path length and improving bandwidth” [sciencedirect.com], reaffirming why a torus might benefit memory traffic patterns. However, many academic NoC prototypes (e.g., MIT RAW, TRIPS, Tilera) stuck with meshes for simplicity. The Tilera Tile64 (2008) is an academic-inspired commercial design that we mentioned; an academic study on manycore memory allocation noted that Tile64 uses a 64KB page size and a controller-interleaved placement, meaning no single 64KB page stays in one controller [arcb.csc.ncsu.edu]. That study (Mueller et al., 2016) was examining OS-level memory allocators for manycores and had to account for Tilera’s fixed interleaving when managing memory blocks [arcb.csc.ncsu.edu].

Another research direction looked at page coloring and permutation-based interleaving to mitigate row buffer conflicts. Zhang et al. (MICRO 2000) proposed a permutation-based page interleaving scheme that spreads out pages such that accesses have a higher chance to hit open DRAM rows and exploit bank-level parallelism [users.cs.utah.edu]. This indicates that beyond simply striping addresses, how you interleave (linear vs. hashed vs. permutation) can impact the efficiency of memory access – a consideration both in research and in some industry controllers (which often XOR address bits to hash across banks).

Academic interest continues in how to optimally place memory controllers in a NoC and assign addresses. For example, Balasubramonian’s group (Awasthi et al. 2010) highlighted that with multiple on-chip controllers and a large flat address space, the system inherently becomes NUMA – some memory addresses are “near” (served by a close controller) and some “far” [users.cs.utah.edu]. They argued that intelligent data placement or migration is required because neither pure first-touch (locality-only) nor pure round-robin interleaving (uniform-only) is universally best [users.cs.utah.edu]. Their adaptive first-touch and page migration policies were an early example of hardware/software cooperative management to get both low latency and high bandwidth. Subsequent research has built on these ideas, exploring everything from memory networks (treating memory itself as a network of banks) to machine-learning-based page allocation in NUMA systems.

In summary, the academic evolution has moved from simple interleaving for bandwidth (in early multiprocessors) to more nuanced strategies that consider on-chip distances and contention. The interplay of NoC topology and memory placement is now a recognized aspect of manycore design. As core counts and memory channels increase (e.g., 100+ core chips with 8 or more memory channels), researchers have proposed using sophisticated hashing or even runtime page scheduling to map addresses to controllers in a way that minimizes NoC congestion and queuing delays at controllers [users.cs.utah.edu]. We see academic concepts like these being adopted in industry in various forms (hash-based interleaving, QoS-aware memory scheduling, etc.).

Industry Practices and Implementations

Leading SoC and CPU vendors have incorporated memory interleaving and NoC topology considerations into their designs, often documented in whitepapers or technical manuals:

  • ARM (Mesh NoC with Distributed Home Nodes): ARM’s high-performance cache-coherent interconnects (CCI, then CCN, and now CMN series) use mesh topologies for scalability. In ARM’s CMN-600/700 mesh, up to dozens of HN-F nodes (home nodes for cache/memory) are placed throughout the mesh [anandtech.com]. ARM employs hashing (“striping”) of addresses across these HN-F nodes to distribute traffic. The CMN-700, for instance, supports striping across a non-power-of-2 number of memory controllers, indicating a flexible hashing mechanism to evenly map addresses even if, say, 10 or 12 controllers are used [developer.arm.com]. The aim is to avoid any load imbalance in memory requests. Official ARM documentation provides System Address Map (SAM) programming examples where interleaving across chips or controllers can be configured (e.g., enabling 4KB interleaving across local and remote memory nodes) – ensuring that not just on-chip controllers but even memory across chiplets can be unified for software transparency [developer.arm.com]. Products like Ampere’s Altra (80-core ARM Neoverse-N1) indeed feature an ARM mesh interconnect with memory striping and hashing; Ampere’s public documentation notes the use of a coherent mesh where addresses are interleaved across memory interfaces to maximize bandwidth (the Altra has 8 DDR controllers accessed via the mesh). The Neoverse CMN-700 specifically expanded the number of memory controller ports from 16 to 40 to accommodate designs with enormous memory bandwidth (e.g., mixing DDR5 and HBM memory) [anandtech.com]. Such designs crucially rely on interleaving to manage traffic to those many controllers. ARM’s documentation and the Socrates configuration tool allow designers to choose interleaving granularity and hashing algorithms to optimize performance for their SoC.
  • Intel (Mesh on Client/Server CPUs): Intel transitioned from ring buses to a mesh NoC in its Skylake-SP (Xeon Scalable) processors in 2017 [tomshardware.com]. In that mesh, cores and LLC slices are arranged in a grid, and multiple memory controllers (MCs) sit along the mesh as well (e.g., up to six MCs for six DDR4 channels in Xeon). Intel Xeon processors expose a NUMA view by default (each socket is a NUMA node), but within a socket, the OS typically sees a flat memory space interleaved across all on-die controllers. The on-chip mesh and the memory controllers work together to make this transparent. Intel’s hardware uses an address hashing scheme called HAM (Hash Address Mode) in some generations to reduce hotspotting: it XORs a few address bits to more uniformly distribute accesses across the memory channels and ranks [soramichi.jp]. Furthermore, Intel provides BIOS options for sub-NUMA clustering (SNC) on some Xeon models, which essentially partitions the mesh and groups half the controllers with half the cores to create two NUMA domains per socket. In SNC disabled mode, memory is fully interleaved across all controllers; in SNC mode, interleaving is only within each half, improving local latency at the expense of peak bandwidth for each domain. This is a practical example of industry toggling between interleaved vs. localized memory mapping to suit different workload needs. Another Intel example is the many-core Knights Landing (Xeon Phi) processor: it had a high-bandwidth 2D mesh connecting 72+ cores and 6 memory controllers (along with MCDRAM stacks). Intel’s tuning guides recommended using quadrant/SNC modes to manage latency, but when those are off, the default was an all-to-all interleaving to use all memory channels [anandtech.com].
  • AMD (Infinity Fabric and Chiplet Memory Interleaving): AMD’s Epyc processors have a modular design with multiple die “chiplets” each containing cores and a portion of the total memory controllers. The Infinity Fabric serves as a coherent interconnect between these dies. By default, each die manages the memory directly attached to it (NUMA domains), but AMD supports memory interleaving across dies (sometimes called Memory Interleaving or Memory Addressing modes in BIOS). For instance, in the older Opteron Magny-Cours (which packaged two dies in one chip), the system could be configured such that memory addresses alternate between the two dies’ controllers, creating a single contiguous memory space for the OS [chipsandcheese.com]. This helped “scale performance with non-NUMA aware code” by balancing memory traffic, albeit at the cost of remote memory latency [chipsandcheese.com]. In modern EPYC, one can choose “Channel Interleaving” (spreading addresses across the channels on a die) and “Die Interleaving” (spreading across dies). AMD’s platform guidelines often recommend keeping memory fully interleaved across all channels per socket for maximum bandwidth, unless specific NUMA optimizations are required [abhik.xyz]. On-die, AMD’s designs (like Zen 2/3) typically have multiple memory controllers (two per IO die in Epyc) and those controllers interleave at a 256-byte or 512-byte granularity across the channels. AMD’s documentation confirms the benefits: “Memory interleaving makes the participating memory controllers appear as one large pool… Memory traffic is balanced across the controllers in hardware and software does not need to determine how to place data” [docs.amd.com]. This quote from AMD underscores the industry’s goal: make multiple controllers look like a single high-bandwidth memory to simplify software and maximize performance. AMD (via Xilinx) also uses interleaving in its FPGA-oriented NoC as discussed, showing the concept’s broad applicability from CPUs to configurable SoCs.
  • SoC Interconnect IP (Arteris, Sonics, etc.): Dedicated NoC IP providers have long recognized the importance of multichannel memory interleaving. Sonics Inc. introduced an “Interleaved Multichannel Technology (IMT)” in 2008 as part of its on-chip interconnect offerings [design-reuse.com]. Sonics IMT could manage up to 8 external DRAM channels and provided user-controlled interleaving with hardware load balancing [design-reuse.com]. It was designed to be transparent to software, presenting a unified address space and automatically dividing memory transactions among the channels. A Sonics whitepaper noted that simply having two channels without a good interleaving scheme often required burdensome software tweaks to split traffic, whereas their hardware IMT evenly divided traffic and even allowed asymmetric channel configurations with partial interleaving [design-reuse.com]. By splitting memory bursts across multiple channels, Sonics claimed to eliminate wasted bandwidth that occurs when single-channel DDR transfers larger bursts than the typical data object size (e.g., 64-byte cache lines vs. 128-byte DDR bursts) [design-reuse.com]. The interleaving ensured that those large bursts actually fetch useful data from multiple channels in parallel. Similarly, Arteris IP in its FlexNoC product line supports advanced memory interleaving features. The latest Arteris FlexNoC 4 (aimed at AI and automotive SoCs) explicitly touts “HBM2 and multichannel memory support – ideal integration with HBM2 multichannel memory controllers with 8 or 16 channel interleaving” [arteris.com]. This indicates that Arteris can automatically handle the address mapping for up to 8 or 16 channels of wide HBM memory, which often sits on-package. The ability to interleave across a non-power-of-two number of channels (like 6 or 10) is also important for real designs and is a feature in these IPs [developer.arm.com]. These commercial NoC IP solutions provide designers with configurable options: for example, one can select the interleave stride (cache line, 128B, 256B, etc.), the addressing scheme (linear vs XOR hash), and whether to interleave at all or keep controllers separate. Both Sonics and Arteris emphasize that their solutions operate with low overhead and transparency, meaning they handle reordering and splitting such that from the CPU’s perspective, it’s just accessing a bigger, faster memory [design-reuse.com]. They also support mixing interleaved and non-interleaved regions — for instance, some critical memory might be fixed to a specific controller (for latency or security reasons), while bulk memory is interleaved for bandwidth.

In the GPU and high-performance accelerator domain, similar principles apply. GPUs have many memory channels (e.g., 6 or 8 GDDR/HBM channels) and they uniformly interleave across them to maximize throughput – this is typically done at a fine granularity (often at 256-byte or 512-byte boundaries) since GPU workloads stream through large memory regions. NoC topologies in GPUs vary (some use crossbar-like interconnects on-die, others a mesh for very large GPUs). NVIDIA’s recent GPUs, for example, use a hybrid ring+mesh interconnect and incorporate memory partitioning across HBM stacks – again using address hashing to distribute accesses evenly. Although details are proprietary, the concept is analogous to the SoC practices described above.

To conclude, industry practice embraces memory interleaving as a fundamental technique to boost memory performance, and the NoC topology is the backbone that makes it work in a scalable way. Mesh and torus NoCs provide the routing infrastructure to connect many distributed memory controllers; interleaving (striping addresses) is the scheme that maps the memory onto that infrastructure efficiently. Over the years, both academic research and industry implementations have converged on a few key themes:

  • Use interleaving (possibly with intelligent hashing) to maximize bandwidth and balance load across controllers [docs.amd.com, infohub.delltechnologies.com].
  • Be mindful of NoC topology and latency; if needed, allow some NUMA or clustering options to reduce average distance when bandwidth is less critical [chipsandcheese.com, users.cs.utah.edu].
  • Incorporate flexibility in the interconnect IP so designers can choose interleaving strategies per memory region or subsystem (as seen in ARM’s and Arteris’s offerings) [docs.amd.com, arteris.com].
  • Ensure that all of this is abstracted from software unless software explicitly wants to manage it – the goal is typically to make multiple memory channels appear as one “big fast memory” to the programmer [design-reuse.com, docs.amd.com].

Both mesh and torus NoCs can successfully support interleaved memory with careful design. As core counts and memory channels continue to grow (with chiplet-based systems, 3D-stacked memory like HBM, etc.), these techniques are more critical than ever. Future academic work is likely to keep influencing industry – for example, research on machine-learning-guided page placement or new topologies (like 3D meshes) could further improve how we map and move data on-chip. The interplay of topology and memory interleaving will remain a rich area of optimization for SoC architects aiming to squeeze the most performance out of every byte transferred across the chip.

References:

  • Hennessy, J. L., & Patterson, D. A. Computer Architecture: A Quantitative Approach (5th Ed.) – discusses memory interleaving in the context of improving bandwidth (multiple words per cycle) [acs.pub.ro].
  • Dell Technologies, Memory Population Rules for 3rd Gen Intel Xeon Scalable – explains memory interleaving benefits for bandwidth by using all DIMMs/channels in one set [infohub.delltechnologies.com].
  • AMD (Xilinx) NoC Architecture, PG313 Network-on-Chip – describes two/four-controller interleaving presenting a unified address space, with 1KB stripes alternated across controllers and automatic load balancing in hardware [docs.amd.com].
  • Sonics Inc., Press Release (2008) – introduces the IMT interleaving technology for on-chip memory controllers, dividing traffic evenly among up to 8 DRAM channels and operating transparently to software [design-reuse.com].
  • Arteris IP, FlexNoC 4 Announcement (2018) – highlights support for HBM2 and multi-channel memory with 8 or 16-channel interleaving, and automated mesh/torus topology generation for AI SoCs [arteris.com].
  • Awasthi, M. et al. (PACT 2010) – “Handling the Problems and Opportunities Posed by Multiple On-Chip Memory Controllers”; discusses flat address space across multiple on-chip MCs causing NUMA effects and proposes adaptive page allocation to improve on naive interleaving or first-touch, yielding up to 35% speedup [users.cs.utah.edu].
  • Mueller, F. et al. (ARCS 2016) – “Reducing NoC and Memory Contention for Manycores”; notes Tilera Tile64’s 4 MCs with 64KB pages controller-interleaved, and uses address hashing to increase bank-level parallelism [arcb.csc.ncsu.edu].
  • Chips and Cheese tech blog, AMD Magny Cours and HyperTransport (2025) – describes how AMD allowed interleaving memory across two dies to present a unified memory space for software, improving performance for code not optimized for NUMA [chipsandcheese.com].
  • ScienceDirect (J. of Supercomputing, 2025) – notes that a torus network’s wrap-around links reduce average path length versus a mesh, which can improve memory access latency and network bandwidth [sciencedirect.com].
  • AnandTech, Arm Neoverse V1/N2 and CMN-700 (2021) – details the ARM CMN-700 mesh, supporting up to 40 memory controllers and anticipating usage of both DDR and HBM memory with adequate interleaving/hashing to manage traffic [anandtech.com].
  • Patterson, D. A., & Hennessy, J. L. – “Memory Systems and Interleaving” (in earlier editions) – foundational explanation of memory bank interleaving and its use in pipeline and vector processors (not directly cited above, but classic textbook treatment).

Having covered network design and simulation tools, this chapter presents a number of simulation examples. These examples are not intended as detailed studies of particular network or router designs. Rather, they introduce useful experiments that can be performed on typical interconnection networks and highlight interesting, and sometimes counterintuitive, results.

All simulations in this chapter were performed with the detailed flit-level simulator described in Appendix C. Unless stated otherwise, the routers are input-queued with an input speedup of 2 and use virtual-channel flow control. There are 8 virtual channels per input port, each with an 8-flit buffer, for a total of 64 flits of buffering per input port. All packets are 20 flits long. Both virtual-channel allocation and switch allocation use the iSLIP algorithm. A realistic pipeline is assumed, and the per-hop router latency is 3 cycles.


25.1 Routing

As we saw in earlier chapters, routing involves a delicate balance between keeping latency low at light offered traffic and maintaining a high saturation throughput as traffic increases. We first examine how the routing algorithm affects latency and relate the simple metrics of zero-load latency and ideal throughput to actual network performance. Interestingly, routing algorithms differ in how closely they approach their ideal performance. Beyond the usual average-latency metric, we also look at the distribution of message latencies produced by different routing algorithms. A second set of experiments then focuses solely on routing-algorithm throughput, comparing two algorithms over randomly generated traffic patterns.


25.1.1 Latency

This experiment examines the impact of routing on latency in an 8-ary 2-mesh. We begin with the most common graph in interconnection network studies: latency versus offered traffic under uniform traffic. Figure 25.1 compares the performance of four routing algorithms: dimension-order routing (DOR), the randomized minimal algorithm (ROMM) described in Section 9.2.2 and [135], Valiant's randomized algorithm (VAL), and a minimal-adaptive routing algorithm (MAD). MAD is based on Duato's algorithm and implemented with DOR as its deadlock-free sub-function.

At low traffic, the zero-load latency accurately predicts the simulated latency. Defining a cycle as the time for a flit to traverse a channel, the router latency is $t_r = 3$ cycles, and since packets are 20 flits long the serialization latency is 20 cycles. For example, the zero-load latency of a minimal algorithm is

T_0 = t_r \cdot H_{avg} + T_s = 3\left(\frac{16}{3}\right) + 20 = 36\ \text{cycles}

which matches the value shown in the figure. Similarly, VAL's zero-load latency works out to 52 cycles. As traffic increases, of course, contention latency becomes the dominant component of delay, and the vertical asymptote of each latency curve is determined by the saturation throughput of the corresponding routing algorithm.
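The zero-load arithmetic above can be reproduced mechanically. The short sketch below uses the same approximations: an average of k/3 hops per dimension for minimal routing, roughly doubled for Valiant's algorithm.

```c
#include <stdio.h>

int main(void)
{
    const double k = 8.0, n = 2.0;   /* 8-ary 2-mesh            */
    const double t_r = 3.0;          /* per-hop router latency  */
    const double T_s = 20.0;         /* serialization: 20 flits */

    double h_min = n * k / 3.0;      /* average minimal hop count       */
    double h_val = 2.0 * h_min;      /* Valiant roughly doubles the hops */

    printf("T0 (minimal) = %.0f cycles\n", t_r * h_min + T_s);  /* 36 */
    printf("T0 (Valiant) = %.0f cycles\n", t_r * h_val + T_s);  /* 52 */
    return 0;
}
```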

Because flow control is modeled in detail, the routing algorithms saturate below their ideal throughput. The minimal algorithms (DOR, ROMM, and MAD) could ideally sustain 100% of network capacity, whereas Valiant's ideal throughput is lower.

 

DOR comes close to about 90% of capacity, while ROMM and MAD reach roughly 75%. The difference arises because ROMM and MAD partition the virtual channels to avoid deadlock. DOR is inherently deadlock-free on a mesh, so every route can use any virtual channel freely. As with most partitionings, this can create load imbalance, which lowers the realized throughput of ROMM and MAD. VAL must also partition resources for deadlock avoidance, but thanks to its natural load balancing it still reaches about 85% of its ideal value.

Figure 25.2 shows latency versus offered traffic for the same topology and the same four routing algorithms under the transpose traffic pattern. This asymmetric pattern is hard to balance, so DOR performs poorly, saturating at about 35% of capacity. ROMM does somewhat better, pushing throughput to roughly 62%. MAD, however, outperforms them all, achieving more than twice the throughput of DOR and saturating above 75% of capacity. VAL also beats DOR, reaching about 43% of capacity. Performance on difficult patterns such as transpose matters, but so does performance on easy, local patterns such as neighbor traffic. Figure 25.3 shows results under neighbor traffic. Here the minimal algorithms all share the same ideal throughput, but DOR benefits from its simplicity and inherent freedom from deadlock, giving it an edge over ROMM and MAD. As expected, VAL performs the same as it did under the two previous traffic patterns.

The preceding three experiments used average latency, but examining individual packet latencies gives insight into the range and distribution of latencies observed in a given simulation. For example, Figure 25.4 shows the latency distribution for packets sent from (0,0) to (0,3) and from (0,0) to (4,4) using dimension-order routing under uniform traffic at an offered load of 20% of network capacity. At this light load there is little contention, and most packets are delivered in the minimum number of cycles, which appears as the large spike at the left of the graph.

Packets from (0,0) to (4,4) have a higher latency because of their larger hop count.

Running the same simulation with Valiant's routing algorithm produces a more interesting distribution (Figure 25.5). Because each packet is routed through a random intermediate node, there is variety in the path lengths. Each particular path length yields a distribution similar to that of dimension-order routing, and the overall latency distribution for Valiant's algorithm is the weighted superposition of these individual distributions. For routing from (0,0) to (0,3), for example, most packets follow non-minimal paths, producing a bell-shaped distribution. For packets from (0,0) to (4,4), the source and destination are farther apart, so the intermediate node is more likely to fall within the minimal quadrant; in that case the entire path is minimal and the distribution shifts to the left.
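The superposition argument can be illustrated with a small Monte Carlo sketch (an illustrative assumption, not the book's simulator): sample random intermediate nodes in an 8x8 mesh and tally the resulting hop counts for the two source/destination pairs. The spread of hop counts is what produces the spread of latencies in Figure 25.5.

```python
import random
from collections import Counter

K = 8  # 8-ary 2-mesh

def hops(a, b):
    """Minimal hop count between two nodes of a 2-D mesh."""
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def valiant_hops(src, dst):
    """Total hops via a uniformly random intermediate node (Valiant routing)."""
    mid = (random.randrange(K), random.randrange(K))
    return hops(src, mid) + hops(mid, dst)

for src, dst in [((0, 0), (0, 3)), ((0, 0), (4, 4))]:
    samples = Counter(valiant_hops(src, dst) for _ in range(100_000))
    total = sum(samples.values())
    avg = sum(h * n for h, n in samples.items()) / total
    print(src, "->", dst, "average hops ~", round(avg, 2))
```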


25.1.2 Throughput Distributions

From here on we focus solely on the throughput of routing algorithms. The standard traffic patterns typically used to evaluate interconnection networks do a good job of exposing behavior at the extremes, but they are less suited to assessing a network's average behavior. To address this limitation, throughput can be evaluated over many randomly generated permutation traffic patterns.

In this experiment, the throughput of dimension-order and minimal adaptive routing was measured on an 8-ary 2-cube over 500 random permutation samples; the results are shown in Figure 25.6. Dimension-order routing has two distinct peaks, near 27% and 31%, with an overall average throughput of about 29.4% of capacity. Minimal adaptive routing is distributed more evenly, with a single broad peak near 33% and an average throughput of about 33.3%. That is roughly 13.3% higher than dimension-order routing and illustrates the potential benefit offered by adaptive routing.
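Generating such random permutation patterns is straightforward. The sketch below (an assumed setup mirroring how one might drive the measurement, not the simulator's own code) draws one permutation of the 64 nodes so that every source sends all of its traffic to a single, distinct destination; each of the 500 samples would then be simulated to saturation and its throughput recorded.

```python
import random

K, N = 8, 2
NUM_NODES = K ** N  # 64 nodes in an 8-ary 2-cube

def random_permutation_pattern(seed=None):
    """Return a dict mapping each source node to a unique destination node."""
    rng = random.Random(seed)
    dests = list(range(NUM_NODES))
    rng.shuffle(dests)
    return {src: dst for src, dst in enumerate(dests)}

patterns = [random_permutation_pattern(seed=s) for s in range(500)]
print(patterns[0][0], patterns[0][1])  # destinations of nodes 0 and 1 in sample 0
```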

25.2 Flow Control

25.2.1 Virtual Channels

When designing a virtual-channel router, a fixed amount of hardware resources is typically devoted to implementing the virtual-channel buffers. The designer must then decide how to partition these resources to maximize network performance. For example, would a few virtual channels with deep buffers perform better than many virtual channels with shallow buffers?

Figure 25.7 shows the performance of an 8-ary 2-mesh for several partitionings of the virtual-channel buffers. In each configuration, the total buffer capacity, the number of virtual channels times the depth of each virtual channel, is held constant. Several trends can be observed. First, throughput tends to increase with the number of virtual channels; although not shown in the graph, the 8-virtual-channel configuration has a slightly higher saturation throughput than the 4-virtual-channel one. Second, increasing the number of virtual channels increases latency below saturation. With more virtual channels, packets are interleaved more finely, which "stretches" packets across the network. This interleaving effect can be reduced by having the switch allocator give priority to packets that are not blocked and that received a grant in the previous round, although with variable packet lengths the designer must take care to avoid starvation and fairness problems.

The exception to the throughput trend is the 16-virtual-channel configuration. The fact that its zero-load latency differs points to the underlying problem. Because the router model includes pipelining delays, the credit loop latency of a buffer is greater than one cycle. Once the virtual-channel buffers become too shallow to cover this delay, a virtual channel can no longer be kept 100% utilized and stalls waiting for credits. This is the same phenomenon described in Section 16.3, and it affects both zero-load latency and saturation throughput.
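A rough way to see when buffers become "too shallow" is to compare the buffer depth with the credit round-trip time: in the spirit of Section 16.3, a single virtual channel's channel utilization is capped at roughly the ratio of the two. The sketch below applies that rule of thumb with an assumed credit round-trip value (illustrative, not taken from the simulator) to the constant 64-flit buffer budget used above.

```python
def max_vc_utilization(buffer_depth_flits: int, credit_rtt_cycles: int) -> float:
    """Upper bound on one VC's channel utilization: a flit is sent only when a
    downstream slot is known to be free, so depth / credit-RTT caps the duty factor."""
    return min(1.0, buffer_depth_flits / credit_rtt_cycles)

CREDIT_RTT = 6  # assumed credit round-trip latency in cycles (illustrative)

# Constant total buffering of 64 flits per input, split across more and more VCs.
for num_vcs in (2, 4, 8, 16):
    depth = 64 // num_vcs
    print(f"{num_vcs:2d} VCs x {depth:2d} flits -> per-VC utilization cap "
          f"{max_vc_utilization(depth, CREDIT_RTT):.2f}")
```

With these assumed numbers, only the 16-VC configuration (4-flit buffers) falls below a utilization cap of 1.0, matching the exception observed in Figure 25.7.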

Although approximating the hardware cost of virtual channels by the total buffer capacity may be reasonable, increasing the number of virtual channels is not always free in terms of latency. In general, more virtual channels lengthen the virtual-channel allocation time, which affects the router pipeline. This increases zero-load latency, and because reallocating a virtual channel takes longer, it can slightly reduce saturation throughput. The combination of many virtual channels and buffers too shallow to cover the credit loop mentioned above often limits performance. Even so, other important considerations, such as non-interference, can still make these extreme partitionings attractive design points.


25.2.2 Network Size

Network size can have a significant impact on the fraction of ideal throughput a network actually achieves. Figure 25.8 shows latency versus offered traffic for four mesh networks under uniform traffic. Each network uses the same channel width and the same routers, so each network's capacity is determined by its radix. For example, the 4-ary 3-mesh and 4-ary 4-mesh have a capacity of 4b/k = b, while the 8-ary 2-mesh has a capacity of 4b/k = b/2, half that of the radix-4 networks.
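Evaluating the capacity expression for the radices involved (with b the channel bandwidth; the k = 16 case is included on the assumption that the fourth network is the radix-16 mesh discussed below):

$$\Theta_{ideal} = \frac{4b}{k}: \qquad k = 4 \Rightarrow b, \qquad k = 8 \Rightarrow \frac{b}{2}, \qquad k = 16 \Rightarrow \frac{b}{4}$$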

At first glance one might expect these networks, built from identical routers, to achieve similar fractions of their capacity, but the experiment shows otherwise. The achieved fraction of capacity is not constant; it tends to be determined by the network's radix. The radix-4 networks saturate at about 65% of capacity, radix-8 at about 80%, and radix-16 at about 83%. Further experiments confirm the trend that the achieved fraction of throughput is determined by the radix.

These results can be explained by the interaction between network size and flow control. Expressing performance as a fraction of capacity makes networks easy to compare, but it hides the absolute throughput injected at each node. As noted above, a node in a radix-4 network can inject twice as much traffic as one in an 8-ary 2-mesh and four times as much as one in a 16-ary 2-mesh. The injection process at each node is identical across the simulations, yet the overall traffic differs greatly: in the radix-4 networks a small number of nodes each generate a large amount of traffic, while in the larger-radix networks many nodes each generate a small amount. This difference has a large effect on flow control. A few strong sources create a large amount of instantaneous load (that is, burstiness), whereas many weak sources create little. Smoothing out this burstiness is exactly the job of flow control, and how well it succeeds depends on the amount of buffering per node. With identical routers, the smaller-radix networks have less ability to absorb bursts and therefore achieve a smaller fraction of their capacity.


25.2.3 Injection Process

As the previous experiment showed, network size can affect performance, with the burstiness of the traffic playing a key role. In this section, we examine the effect of burstiness on flow control more directly by controlling the burstiness of the injection process itself.

The simplest source of burstiness is the packet size. Even at a very low average injection rate, the unit of injection is always a packet, and a packet may consist of many flits, which can be viewed as a burst of flits destined for the same node. Figure 25.9 shows the effect of packet size on network performance.

The main trend in the data is that larger packets increase latency and reduce throughput. Large packets are inherently at a latency disadvantage because of their higher serialization overhead, which is visible in the zero-load latencies. In addition, because flow control is not perfect, larger packets make it harder to use resources efficiently. For example, a 40-flit packet spans at least 5 routers, since each router's buffers are 8 flits deep. If such a packet is temporarily blocked, resources in all 5 of those routers are tied up, which reduces saturation throughput.

The exception to this overall trend is the single-flit packet case (PS = 1). The router model in these simulations uses the conservative virtual-channel reallocation scheme shown in Figure 16.7(a) of Section 16.4, so several cycles are needed before a virtual channel can be reused. The smaller the packet, the larger the relative cost of this reallocation time, which effectively reduces the number of virtual channels usable in the router. Additional simulations confirmed that this effect disappears when the reallocation time is reduced or the number of virtual channels is increased.

Another aspect of the injection process can be explored on a mesh network using the 2-state MMP (Markov modulated process) described in Section 24.2.2. In this experiment the packet size is again fixed at 20 flits, and Figure 25.10 shows performance for several MMP parameter values. Each MMP has two parameters, α and β, which control the spacing and duration of bursts. 1/α is the average gap between bursts, and the curves correspond to average gaps of 1, 200, and 400 cycles. β can be interpreted as the reciprocal of the average burst duration, so the first curve has an infinite burst duration while the second and third have average durations of 100 and 50 cycles, respectively.

Because the first MMP has an infinite burst duration, it is always in the on state and is therefore equivalent to a Bernoulli injection process, so its α parameter has no effect on the steady state. From the analysis in Section 24.2.2, the injection rate during a burst is 1 + β/α times the average injection rate; the larger the ratio β/α, the more intense the bursts.
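A 2-state MMP is easy to sketch: the process flips between an "off" state (no injection) and an "on" state (Bernoulli injection at an elevated rate) with per-cycle transition probabilities α and β. The snippet below is a simplified stand-in for the Section 24.2.2 process, not the simulator's implementation; it checks that injecting at (1 + β/α) times the target rate while "on" recovers the target average rate.

```python
import random

def simulate_mmp(alpha, beta, avg_rate, cycles=200_000, seed=1):
    """2-state MMP: 'off' -> 'on' with prob alpha, 'on' -> 'off' with prob beta.
    Packets are injected only in the 'on' state, at avg_rate * (1 + beta/alpha),
    so the long-run average injection rate equals avg_rate.
    (avg_rate must be small enough that the on-state rate stays <= 1.)"""
    rng = random.Random(seed)
    on_rate = avg_rate * (1 + beta / alpha)
    on, injected, on_cycles = False, 0, 0
    for _ in range(cycles):
        if on:
            on_cycles += 1
            if rng.random() < on_rate:
                injected += 1
            if rng.random() < beta:
                on = False
        elif rng.random() < alpha:
            on = True
    print(f"avg rate ~{injected / cycles:.3f} (target {avg_rate}), "
          f"on-state rate {on_rate:.2f}, fraction of time on {on_cycles / cycles:.2f}")

# Third MMP from the text: 1/alpha = 400-cycle gaps, 1/beta = 50-cycle bursts.
simulate_mmp(alpha=0.0025, beta=0.02, avg_rate=0.1)
```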


Figure 25.10 shows latency versus offered traffic for the 8-ary 2-mesh under these MMP injection processes. For the second MMP, β/α = 0.01/0.005 = 2, so the injection rate during a burst is 3 times the average. At an overall offered load of 40%, for example, the injection rate during a burst spikes to 120%, and no packets are injected between bursts. This bursty behavior increases average latency and reduces saturation throughput.


25.2.4 Prioritization

Most evaluations of network latency focus on the average latency of messages, but the flow-control mechanism can greatly influence the distribution of individual message latencies. Controlling this distribution is important for applications that are sensitive to worst-case delay and jitter, for fairness, and for networks that carry messages of different priorities.

For example, suppose a network carries two classes of messages: real-time video traffic requiring low delay and jitter, and delay-tolerant data transfers. In this case the real-time traffic is given high priority so that its requirements can be met.

Figure 25.11 shows the results of an experiment with two priority classes on a 2-ary 6-fly. Here 10% of the traffic is high priority and the remaining 90% is low priority, with the network operating near saturation. The router model uses the separable allocator of Section 19.3, with prioritized arbiters that select the requester with the highest priority and break ties in round-robin order.

As a result, about 71% of the high-priority messages are delivered at the network's minimum latency of 37 cycles, and 99% arrive within 70 cycles. The low-priority traffic has a higher average latency: 98% of its packets arrive within 300 cycles, but the tail extends beyond 700 cycles.
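A minimal sketch of such a prioritized arbiter is shown below (an assumed structure, not the simulator's implementation): among the active requesters it grants the highest priority, and among equal priorities it rotates a round-robin pointer. Replacing the priority field with the packet's age would give the age-based arbitration discussed next.

```python
class PrioritizedRoundRobinArbiter:
    """Grant the requester with the highest priority; break ties round-robin."""

    def __init__(self, num_ports: int):
        self.num_ports = num_ports
        self.pointer = 0  # round-robin pointer used only for tie-breaking

    def arbitrate(self, requests):
        """requests: dict {port: priority} for the ports requesting this cycle."""
        if not requests:
            return None
        best = max(requests.values())
        candidates = [p for p, prio in requests.items() if prio == best]
        # Pick the first candidate at or after the round-robin pointer.
        ordered = sorted(candidates, key=lambda p: (p - self.pointer) % self.num_ports)
        grant = ordered[0]
        self.pointer = (grant + 1) % self.num_ports
        return grant

arb = PrioritizedRoundRobinArbiter(num_ports=4)
print(arb.arbitrate({0: 1, 2: 5, 3: 5}))  # ports 2 and 3 tie; pointer at 0 -> grant 2
print(arb.arbitrate({0: 1, 2: 5, 3: 5}))  # pointer has advanced past 2 -> grant 3
```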

This degree of differentiation is only possible when high-priority traffic makes up a small minority of the total (on the order of 10% or less). As the fraction of high-priority traffic grows, the effect diminishes, and when most of the traffic is high priority the benefit of prioritization all but disappears.

Another important scheme is the age-based fairness described in Section 15.4.1. It also uses a prioritized allocator, but this time priority is given to the oldest packet among the requesters, where a packet's age is measured as the number of cycles since it was injected into the network.

Figure 25.12 compares the latency distributions with and without age-based arbitration. Without it, some packets take more than 600 cycles and the tail is long; with it, the tail shortens and most packets arrive within 400 cycles, at the cost of a slight increase in average latency.


25.2.5 Stability

When a network operates near or beyond saturation, the designer is often more concerned with the fairness of the flow control than with latency. If channels are not allocated fairly at saturation, some flows can be starved, their throughput collapsing, and the network becomes unstable.

Figure 25.13 shows the throughput of an unstable network along with two flow-control schemes that enforce fairness. To keep greedy flows from masking starved ones, throughput is reported as the minimum throughput over all flows.

All three schemes perform similarly below saturation, which occurs at roughly 43% of network capacity. Beyond that point, however, the results diverge sharply. With no fairness mechanism, throughput collapses to less than 5%. This is analogous to the parking-lot example of Section 15.4.1: packets whose routes traverse fewer hops capture a disproportionate share of the resources.

With age-based arbitration, by contrast, throughput remains very stable beyond saturation. A non-interfering network, which dedicates a separate virtual channel to each destination, is also stable, although its throughput drops off slightly earlier and levels out at about 35% of capacity.


25.3 Fault Tolerance

Many interconnection networks must continue operating in the presence of faults, and it is important that performance degrade gracefully as faults accumulate. Figure 25.14 shows an example of such graceful degradation.

In this experiment, an 8-ary 2-mesh is simulated with varying numbers of failed links, using the fault-tolerant planar-adaptive routing of Section 14.8. For each number of faults, the saturation throughput under uniform traffic is measured.

Because the placement of the failed links affects the results, the throughput for each fault count is averaged over 30 random fault configurations, and the standard deviation is also plotted. To simplify the experiment, links are chosen so that the fault regions remain convex; under this constraint, planar-adaptive routing keeps all nodes connected.


Figure 25.14 shows saturation throughput versus the number of failed links for the 8-ary 2-mesh with fault-tolerant planar-adaptive routing under uniform traffic. Each point is the average throughput over 30 random samples of failed links, and the vertical error bars indicate the standard deviation: the top and bottom of each bar are the mean plus and minus one standard deviation, respectively.

With no failed links, throughput is slightly above 60% of capacity, and the small drop in throughput with a few faults shows that the network degrades gracefully. Even as the number of faults grows to 12, the rate of throughput loss does not increase much, indicating that the network remains fairly resilient. At the same time, the standard deviation grows with the number of faults, reflecting the fact that clustered faults can have a larger impact than isolated ones.

For example, if many faults are concentrated in a small region of the network, the number of channels into that region shrinks and the remaining channels can become heavily loaded. As the number of faults grows, so does the probability that such a cluster nearly isolates some node; in the extreme, the network can become partitioned, with no path at all between certain pairs of nodes.
