ATG Press Scratchpad

The ATG Press Scratchpad contains brief descriptions of largely completed research work at ATG that has not yet been documented. Each of the subjects below is expected to be cast into one or more papers appearing in any or all of the three mentioned publication categories. The descriptions that follow were last updated on Aug 16, 2007:
 


Theory of Reduction in Computer Architecture (director: Farnad Laleh; publication date: not determined; publication form: not determined)

Description:
The computational power of every computational system depends on the structure of its instruction set. All deterministic computational systems can be reduced to one another. The most important computational systems, which underlie much of the work in computer science, are the Turing machine and the random access machine (RAM).
Generally, computational power reflects the order of complexity of a computation on a computational system: the lower the complexity order, the greater the computational power. When we reduce different computational structures to one another, the complexity order, and hence the computational power, changes. For example, reducing a RAM to a Turing machine increases the complexity order by a cubic factor.
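As a concrete restatement of that cubic overhead (assuming the standard simulation of a RAM by a multi-tape Turing machine, which this note does not spell out): if a RAM program halts within T(n) steps, the simulating Turing machine needs on the order of T(n)^3 steps, so a computation of order T(n) on the RAM becomes a computation of order T(n)^3 on the Turing machine, even though both machines compute the same function.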
In addition to computational power, there is another factor that we call computational performance. This factor deals with the constants in the complexity order formula. It means that a computational structure with a given computational power can exhibit different computational performances, owing to different constants in its complexity order formula. A well-known example is the implementation of an arbitrary RAM with and without pipelining: both implementations have the same computational power but different computational performances.
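The toy Python sketch below (an illustration written for this note, not ATG's model) contrasts an ideally pipelined machine with a non-pipelined one executing the same instruction stream. Both produce the same result, so their computational power is identical, but the constant factor in their cycle counts differs.

    def cycles_unpipelined(num_instructions, stages):
        # Each instruction occupies the whole datapath for all of its
        # stages before the next instruction may start.
        return num_instructions * stages

    def cycles_pipelined(num_instructions, stages):
        # Ideal pipeline with no hazards: fill the pipeline once,
        # then retire one instruction per cycle.
        return stages + num_instructions - 1

    n, s = 1_000_000, 5
    print(cycles_unpipelined(n, s))  # 5,000,000 cycles -> constant factor ~5
    print(cycles_pipelined(n, s))    # 1,000,004 cycles -> constant factor ~1

Both machines execute exactly n instructions and reach the same final state; only the constant multiplying n in the running time changes, which is precisely the "computational performance" distinction described above.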
As with computational power, we can consider a reduction procedure for computational performance. To this end, we introduce the extended random access machine (ERAM) to model both computational power and performance, and we then investigate the reduction procedure for ERAM computational performance. In this work, we show that the globe scale architecture has the best computational performance among ERAMs with the same computational power.
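Since the ERAM is defined only in the forthcoming paper, the Python sketch below is purely a hypothetical illustration of the general idea: a register machine annotated with a per-instruction cost table, so that two machines with identical instruction semantics (the same computational power) can still be compared by their total constant cost (computational performance). All names, instructions, and costs here are invented for the sketch.

    def run(program, costs, registers):
        # Execute a straight-line list of (op, dst, src1, src2) instructions,
        # accumulating the constant cost charged for each instruction.
        total_cost = 0
        for op, dst, a, b in program:
            if op == "add":
                registers[dst] = registers[a] + registers[b]
            elif op == "mul":
                registers[dst] = registers[a] * registers[b]
            total_cost += costs[op]
        return registers[dst], total_cost

    # Compute r2 = r0*r0 + r1 on two machines with identical semantics
    # but different per-instruction constants.
    program = [("mul", "r2", "r0", "r0"), ("add", "r2", "r2", "r1")]
    fast = {"add": 1, "mul": 3}    # e.g. a dedicated hardware multiplier
    slow = {"add": 1, "mul": 30}   # e.g. multiplication done in microcode
    regs = {"r0": 4, "r1": 7, "r2": 0}

    print(run(program, fast, dict(regs)))  # (23, 4)  -- same result, total cost 4
    print(run(program, slow, dict(regs)))  # (23, 31) -- same result, total cost 31

Under this reading, reducing one such machine to another preserves the computed results (power) while changing the accumulated constants (performance), which is the kind of reduction the description above proposes to study.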

 


