.
-as of [30 SEPTEMBER 2024]-
.
.
-a multi-core processor is a ‘single computing component’ with 2 or more ‘independent actual processing units’ (called “cores”), which are ‘units’ that ‘read’ + ‘execute’ program instructions-
.
(the ‘instructions’ are ‘ordinary CPU instructions’ (such as ‘add’ / ‘move data’ / ‘branch’), but the multiple ‘cores’ can run multiple ‘instructions’ at the same time, increasing ‘overall speed’ for programs amenable to ‘parallel computing’)
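(the parallel-execution idea above can be sketched in a few lines; this is a minimal illustration using Python's multiprocessing module as a stand-in for independent 'cores' — the function names 'square' and 'parallel_map' are illustrative, not from the source)

```python
from multiprocessing import Pool

def square(n):
    # stand-in for a stream of 'ordinary CPU instructions' (multiply)
    return n * n

def parallel_map(func, items, workers=4):
    # fan independent work items out across 'workers' processes,
    # which the OS can schedule onto separate physical cores
    with Pool(processes=workers) as pool:
        return pool.map(func, items)

if __name__ == "__main__":
    print(parallel_map(square, range(8)))  # [0, 1, 4, 9, 16, 25, 36, 49]
```

(note the work items must be independent of each other — that independence is what makes a program 'amenable to parallel computing')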
(manufacturers typically integrate the ‘cores’ onto a ‘single integrated circuit die’ (known as a ‘chip multi-processor’ / ‘CMP’)…)
(…or onto multiple dies in a ‘single chip package’)
(a ‘multi-core processor’ implements ‘multi-processing’ in a ‘single physical package’)
(designers may couple ‘cores’ in a multi-core device ‘tightly’ or ‘loosely’)
(for example, ‘cores’ may or may not share ‘caches’, and they may implement ‘message passing’ or ‘shared-memory inter-core’ communication methods)
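(the two inter-core communication styles above can be sketched with Python's multiprocessing primitives — a Queue for 'message passing' and a Value for 'shared memory'; processes here merely stand in for cores, and the names 'producer' / 'incrementer' are illustrative)

```python
from multiprocessing import Process, Queue, Value

def producer(q):
    # 'message passing': send a value to another process over a queue
    q.put(42)

def incrementer(v):
    # 'shared memory': mutate a value both processes can see,
    # taking a lock to avoid a lost update
    with v.get_lock():
        v.value += 1

if __name__ == "__main__":
    q = Queue()
    p = Process(target=producer, args=(q,))
    p.start(); p.join()
    print(q.get())   # 42

    v = Value('i', 0)
    p2 = Process(target=incrementer, args=(v,))
    p2.start(); p2.join()
    print(v.value)   # 1
```

(message passing copies data between otherwise-private memories; shared memory avoids the copy but needs synchronization — the same trade-off 'tightly' vs 'loosely' coupled cores face in hardware)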
.
(common ‘network topologies’ to interconnect ‘cores’ include…)
‘bus’
‘ring’
‘2-dimensional mesh’
‘crossbar’
.
(‘homogeneous multi-core systems’ include only ‘identical cores’)
.
(‘heterogeneous multi-core systems’ have ‘cores’ that are not ‘identical’)
.
(e.g. ‘big.LITTLE’ has heterogeneous ‘cores’ that share the same ‘instruction set’, while ‘AMD Accelerated Processing Units’ have cores that don’t even share the same ‘instruction set’)
.
(just as with ‘single-processor systems’, ‘cores’ in ‘multi-core systems’ may implement ‘architectures’ such as…)
‘VLIW’
‘super-scalar’
‘vector’
‘multi-threading’
.
(‘multi-core processors’ are widely used across many ‘application domains’, including…)
‘general-purpose’
’embedded’
‘network’
‘digital signal processing’
(DSP)
‘graphics’
(GPU)
.
(the improvement in performance gained by the use of a ‘multi-core processor’ depends very much on the ‘software algorithms’ used + their implementation)
(in particular, possible gains are limited by the fraction of the software that can run in parallel simultaneously on multiple ‘cores’)
(this effect is described by ‘Amdahl’s law’)
(in the best case, so-called ’embarrassingly parallel problems’ may realize speedup factors near the # of ‘cores’, or even more if the problem is split up enough to fit within each core’s cache(s), avoiding use of much slower ‘main-system memory’)
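(the limit described by ‘Amdahl’s law’ is S = 1 / ((1 − p) + p/n), where p is the parallel fraction and n the number of cores — the formula is standard; the parameter values below are illustrative only)

```python
def amdahl_speedup(p, n):
    # maximum speedup for a program whose fraction p runs in
    # parallel on n cores; the serial fraction (1 - p) dominates
    return 1.0 / ((1.0 - p) + p / n)

# even with 95% of the work parallelizable, 8 cores give
# well under an 8x speedup:
print(round(amdahl_speedup(0.95, 8), 2))   # 5.93

# and the ceiling as n grows without bound is 1 / (1 - p) = 20x
```

(a fully parallel program, p = 1, is the ’embarrassingly parallel’ best case: speedup equals the core count)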
.
(most applications, however, are not ‘accelerated’ so much unless programmers invest a prohibitive amount of effort in re-factoring the whole problem)
.
(the ‘parallelization’ of ‘software’ is a significant ongoing ‘topic of research’)
.
.