Dagger Hashimoto


    Dagger Hashimoto was a precursor to the Ethereum 1.0 mining algorithm in terms of research, implementation, and specification, although it has since been superseded by Ethash.

    The content of this article is retained for historical purposes.

    Dagger Hashimoto aspires to achieve two objectives at the same time:

    1    ASIC-resistance: the gain from developing specialized hardware for the algorithm should be as modest as possible, preferably to the point that, even in an environment where ASICs have been produced, the speedup is tiny enough that users on conventional computers may still mine with spare CPU power.
    2    Verifiability by a light client: a block should be relatively easy to verify by a light client.

    With one further modification, we can also achieve a third goal if desired, at the cost of additional complexity:


    Storage of the entire blockchain state: mining should require storing the entire blockchain state (given the irregular structure of the Ethereum state trie, we anticipate that some pruning will be possible, particularly of frequently used contracts, but we want to minimize this).


    Dagger Hashimoto builds on two key pieces of prior work:

  • Thaddeus Dryja's Hashimoto method aims to achieve ASIC resistance by being IO-bound, i.e. by making memory reads the limiting factor in the mining process.
    The theory is that RAM is an inherently much more generic ingredient than computation, and billions of dollars have already been invested in optimizing it for diverse use cases, many of which involve near-random access patterns (hence the term "random access memory"); thus, existing RAM is likely to be moderately close to optimal for evaluating the algorithm.
    Because Hashimoto uses the blockchain as its data source, it satisfies goals (1) and (3) simultaneously.
  • Dagger is an algorithm by Vitalik Buterin that uses directed acyclic graphs to achieve memory-hard computation alongside memory-easy validation. The core idea is that each nonce requires only a small fraction of a large total data tree, and recomputing the subtree for each nonce is prohibitively expensive for mining (hence the need to store the tree), but acceptable for verifying a single nonce. Dagger was intended as a replacement for existing memory-hard algorithms such as Scrypt, which are memory-hard but extremely difficult to verify once their memory-hardness is raised to genuinely secure levels. However, Sergio Lerner demonstrated that Dagger was vulnerable to shared-memory hardware acceleration, and it was subsequently dropped in favor of other research directions.
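The Dagger idea, a DAG of data items where a miner stores the whole table but a verifier recomputes only the few items a single nonce touches, can be sketched in Python. All names, sizes, parent counts, and the toy hash below are illustrative assumptions, not the original specification:

```python
import hashlib

def H(*args):
    """Toy hash: SHA-256 over the string forms of the arguments."""
    h = hashlib.sha256()
    for a in args:
        h.update(str(a).encode())
    return int.from_bytes(h.digest(), "big")

SEED = 42          # hypothetical epoch seed
SIZE = 1 << 16     # toy dataset size (the real scheme was far larger)
PARENTS = 2        # parents per node in the DAG

def calc_item(i, cache={}):
    """Compute dataset item i from its DAG parents, memoizing results.

    A light client calls this only for the handful of items one nonce
    touches; a miner fills the whole table once and then reuses it.
    """
    if i in cache:
        return cache[i]
    if i < PARENTS:                                    # base of the DAG
        val = H(SEED, i)
    else:
        parents = [H(SEED, i, p) % i for p in range(PARENTS)]
        val = H(*(calc_item(p) for p in parents))
    cache[i] = val
    return val

def pow_value(header, nonce, accesses=8):
    """Mix a few pseudo-randomly selected dataset items into one value."""
    mix = H(header, nonce)
    for k in range(accesses):
        mix ^= calc_item(mix % SIZE)    # cheap for a verifier: only the
        mix = H(mix, k)                 # touched items get recomputed
    return mix
```

The asymmetry is visible in `pow_value`: a miner evaluating many nonces benefits from storing all `SIZE` items, while a verifier checking one nonce recomputes only `accesses` items plus their small parent subtrees.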

    Approaches that were explored on the way to Dagger and Dagger Hashimoto, but are not currently our primary focus, include:

  • "Blockchain-based proof of work" is a proof-of-work function in which executing contracts on the blockchain constitutes the work. The concept was abandoned because of long-range attack vulnerabilities: attackers can create forks and populate them with contracts for which they hold a secret fast "trapdoor" execution mechanism.
  • "Random circuit" is a proof-of-work function, developed primarily by Vlad Zamfir, that involves generating a new program every 1000 nonces, effectively choosing a new hash function each time, faster than FPGAs can reconfigure themselves. The approach was temporarily shelved because it was unclear what mechanism could generate random programs general enough to keep specialization gains low; nevertheless, we see no fundamental reason why the concept cannot be made to work.

    Dagger Hashimoto differs from Hashimoto in that, rather than using the blockchain as its data source, it uses a custom-generated 1 GB dataset that updates every N blocks based on block data. The dataset is generated using the Dagger algorithm, which allows a light client's verification algorithm to efficiently compute the subset specific to each nonce. Dagger Hashimoto differs from Dagger in that, unlike in the original Dagger, the dataset used to query the block is semi-permanent, updated only at occasional intervals (e.g. once per week). As a result, Sergio Lerner's concerns about shared-memory speedups become negligible.
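The combination described above, a semi-permanent dataset regenerated once per epoch from block data, queried by an IO-bound Hashimoto-style inner loop, can be illustrated with a minimal Python sketch. The hash, the epoch length, the dataset size, and all function names are assumptions for illustration, not the original specification:

```python
import hashlib

def H(*args):
    """Toy hash standing in for the real primitives."""
    h = hashlib.sha256()
    for a in args:
        h.update(str(a).encode())
    return int.from_bytes(h.digest(), "big")

EPOCH = 1000       # hypothetical: regenerate the dataset every N blocks
SIZE = 1 << 12     # toy size; the scheme described above used ~1 GB

def make_dataset(block_number, block_data):
    """Semi-permanent dataset: rebuilt only once per epoch from block data."""
    seed = H(block_number // EPOCH, block_data)
    return [H(seed, i) for i in range(SIZE)]

def daggerhashimoto(dataset, header, nonce, accesses=32):
    """IO-bound inner loop: the work is dominated by dataset reads."""
    mix = H(header, nonce)
    for _ in range(accesses):
        mix = H(mix, dataset[mix % SIZE])   # pseudo-random memory read
    return mix

def mine(dataset, header, difficulty):
    """Search for a nonce whose mix value clears the difficulty target."""
    target = 2 ** 256 // difficulty
    nonce = 0
    while daggerhashimoto(dataset, header, nonce) >= target:
        nonce += 1
    return nonce
```

Because the dataset changes only once per epoch, the generation cost is amortized to nearly nothing, which is why the shared-memory attack on per-nonce Dagger generation no longer applies.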

