Recovering The Private Key in a ECDSA Signature Using A ...

08-13 21:45 - 'Building the Infrastructure for the Future Decentralized Financial Market, Coinbase Included HBTC.Com Debut DeFi Project - Nest Protocol' (self.Bitcoin) by /u/Nest_Fan removed from /r/Bitcoin within 24-34min

'''
As the world’s leading regulatory-compliant digital asset exchange, Coinbase sets some of the most stringent requirements for digital asset listing, which include technical evaluation of projects, legal and risk analysis, market supply and demand analysis, and crypto-economics. Coinbase holds a strong reputation in the digital asset industry, and thus the “Coinbase Standard” is considered the industry benchmark for other digital asset projects; the market has even seen a “Coinbase effect”.
On July 25 2020, Coinbase quietly launched the pricing chart of a decentralized oracle project, NEST Protocol (NEST), into its portal. Although Coinbase has yet to announce the inclusion of the project in its evaluation list, it represents a keen interest in the DeFi sector, and particularly in the DeFi price oracle projects.
NEST Protocol is the rising star in the decentralized price oracle sector
Decentralized financial services offered by the current mainstream DeFi platforms such as MakerDAO, Compound, dYdX, etc. rely heavily on the market data provided by the oracle projects. Oracle projects act as reliable information sources to feed these price data to other DeFi Projects, connecting the price data from the centralized world to the DeFi space. As such, the price oracle is an integral part of the decentralized financial services infrastructure.
Traditionally, the price oracle collects data from different platforms and feeds these data points to the DeFi space to create reference points that enable applications to function properly. However, many problems currently exist in the DeFi space: blockchain network congestion, malicious attacks, wild market fluctuations, and other factors may cause the data given by the price oracle to deviate from the true market data. This ultimately causes users to trade on wrong information in the DeFi space and increases their transaction costs.
Decentralized finance requires a fast, secure, and reliable price oracle. The birth of the decentralized price oracle is the embodiment of the blockchain industry’s thinking, and the projects currently offering decentralized price oracle services include NEST Protocol, Chainlink, Band Protocol, Tellor, Witness, Oraclize, and many others.
The innovation of NEST-Price is that every data point has been agreed upon by market validators, in line with the blockchain consensus mechanism. NEST-Price synchronizes the off-chain price in a highly decentralized manner, creating real and valid price data on-chain. This is the unique differentiator between NEST-Price and other price oracles.
Compared with other price oracle projects, NEST also has other features and advantages, such as its proposed peer-to-peer quotation matching and its unique verifier verification structure, making NEST more resilient to malicious attacks, resulting in a more decentralized network and on-chain prices closer to the fair market price. All of this has made the NEST Protocol a rising star in the DeFi price oracle sector.
HBTC.com selects high-quality projects to list and partners with NEST to promote the development of the DeFi ecosystem
During the selection of quality assets, exchanges like [HBTC.com]1 and Coinbase adhere to the principle of a rigorous selection of assets from different projects to enable a proper range of digital assets. At the same time, in order to solve existing pain points in the digital asset industry, which currently lacks a market-making management solution, HBTC.com also has launched its own “coin listing crowdsourcing [liquidity initiative]2 “, redefining the exchange market making model.
HBTC.com, through its coin listing strategy, effectively reduces the problem of low liquidity in the early stages of high-quality projects, ensuring the smoothness of the user experience, and achieves a win-win situation for traders, the community, and the respective trading platform. These initiatives, coupled with reliable user protection and a responsible attitude, have earned a positive reputation among users.
Since its inception, the HBTC.com exchange has been committed to discovering both quality and promising digital asset projects. At a time when DeFi is growing rapidly, HBTC.com has a unique perspective on the decentralized price oracle sector and has prioritized NEST as a premium partner, debuting the project alongside its global branding upgrade. In addition, HBTC.com maintains [100% proof of reserves]3 so that traders can validate the existence of assets via a Merkle tree, bringing transparency to the extreme.
In May 2020, the NEST token delivered an 883.29% return at its peak after its global debut on HBTC.com. At present, addresses on the HBTC Exchange hold a total of 141 million NEST tokens, ranking first across the network. At the same time, the HBTC Exchange exclusively offers NEST staking mining, and data show that NEST 24-hour turnover has reached $20.4 million.
Following the listing of the NEST token, HBTC.com has also listed DeFi projects with market potential such as DF, OKS, SWTH, JST, and NVT; some of these projects have achieved astonishing performance in the secondary market.
HBTC.com’s path to DeFi: developing public chains to prepare for the future ecosystem breakout.
In terms of the DeFi product and ecosystem infrastructure, HBTC has been running HBTC Chain since its launch in 2018, an infrastructure designed for decentralized finance and DeFi business with patented Bluehelix decentralized cross-chain clearing and custody technology.
The HBTC Chain is the DeFi ecosystem infrastructure that the team has spent a significant amount of effort to build. It is based on decentralization and community consensus and integrates cryptography and blockchain technologies to support decentralized, association-based governance capabilities at the technical level. Based on decentralized key management and combining cryptographic tools including ECDSA, commitments, zero-knowledge proofs, and multi-party computation, it implements distributed private key generation and signing for cross-chain assets among all validators. On top of that, this technology enables lightweight and non-intrusive cross-chain asset custody. On the clearing layer, HBTC Chain employs BHPOS consensus and horizontal sharding mechanisms to achieve high-performing transaction clearing, and implements the OpenDex protocol to help the development of the DeFi ecosystem.
In addition, building on the success of the Bluehelix Cloud SaaS and white-label solutions and the HBTC Brokerage system, HBTC’s public chain also innovatively supports a mixed CEX+DEX matchmaking model and the OpenDex protocol, and proposes a three-tier node system consisting of standard nodes, consensus nodes, and core nodes. This structure gives the HBTC public chain certain advantages in terms of performance and cross-chain transactions. Users can easily establish a DEX with the OpenDex protocol at nearly zero cost, and all DEXs will share liquidity and support customized user interfaces and trading parameters. The trading experience can be completely comparable to centralized spot exchanges.
With the launch of its test network, it is now possible to develop various DeFi applications on the HBTC public chain: decentralized swaps, so that private keys are not controlled by any party; no KYC, which prevents personal information leakage; asset security through invalidation, transaction cancellation, and other functions; and cross-chain asset mappings, such as the ability to issue cross-chain cBTC or other chain tokens via fully decentralized asset mapping contracts backed by 100% reserves.
Conclusion
In the past few months, the DeFi market has been extremely active, the prices of DeFi tokens have been rising, and a new round of competition with the centralized exchanges has started. HBTC Chain relies on the powerful technology of Bluehelix and [HBTC.com]1 , giving all public chains the ability to interconnect at both the DeFi and SaaS levels. Undoubtedly, as one of the first exchanges to build out the DeFi ecosystem, HBTC is leading the breakout in the current DeFi craze and has now become the first choice for users to engage with quality DeFi projects.

From Bitcoin news ([link]6)
'''
Building the Infrastructure for the Future Decentralized Financial Market, Coinbase Included HBTC.Com Debut DeFi Project - Nest Protocol
Go1dfish undelete link
unreddit undelete link
Author: Nest_Fan
1: *btc*com/ 2: m*diu**com/hbt***ficia*/hbt*-launches-ba**liquidi*y***owd*unding-li*ti*g-plan-redefine-t*e*exch*nge-*i*tin**m*d*l***6*58f*f1d* 3: hbtc.ze**e*k*co*/hc/*n-us/a**icles/3***46287754-HBT*-10*-*ro***of*Reserve 4: hb*c.co*/ 5: n*ws.bitcoin.c*m*bu*ld*ng-t**-infr***ructur*-f*r-the*fut*re*decen**ali**d-*inanc*a*-market-coi**as*-*ncluded-h*t*-*o*-*ebut-de**-p*oject-n*st-**otocol* 6: n**s.bit*oin*com/building-th*-infrast*u*ture*for-t*e-fut****decen**a**zed**inancia*-m*rket-coinbase-**c*uded-*b*c-c***deb***defi-**oject-*est**r**ocol/]^^5
Unknown links are censored to prevent spreading illicit content.
submitted by removalbot to removalbot [link] [comments]

Bitcoin (BTC): A Peer-to-Peer Electronic Cash System.

  • Bitcoin (BTC) is a peer-to-peer cryptocurrency that aims to function as a means of exchange that is independent of any central authority. BTC can be transferred electronically in a secure, verifiable, and immutable way.
  • Launched in 2009, BTC is the first virtual currency to solve the double-spending issue by timestamping transactions before broadcasting them to all of the nodes in the Bitcoin network. The Bitcoin protocol offered a solution to the Byzantine Generals’ Problem with a blockchain network structure, building on the timestamped chain-of-records notion first described by Stuart Haber and W. Scott Stornetta in 1991.
  • Bitcoin’s whitepaper was published pseudonymously in 2008 by an individual, or a group, with the pseudonym “Satoshi Nakamoto”, whose underlying identity has still not been verified.
  • The Bitcoin protocol uses an SHA-256d-based Proof-of-Work (PoW) algorithm to reach network consensus. Its network has a target block time of 10 minutes and a maximum supply of 21 million tokens, with a decaying token emission rate. To prevent fluctuation of the block time, the network’s block difficulty is re-adjusted through an algorithm based on the past 2016 block times.
  • With a block size limit capped at 1 megabyte, the Bitcoin Protocol has supported both the Lightning Network, a second-layer infrastructure for payment channels, and Segregated Witness, a soft-fork to increase the number of transactions on a block, as solutions to network scalability.

https://preview.redd.it/s2gmpmeze3151.png?width=256&format=png&auto=webp&s=9759910dd3c4a15b83f55b827d1899fb2fdd3de1

1. What is Bitcoin (BTC)?

  • Bitcoin is a peer-to-peer cryptocurrency that aims to function as a means of exchange and is independent of any central authority. Bitcoins are transferred electronically in a secure, verifiable, and immutable way.
  • Network validators, who are often referred to as miners, participate in the SHA-256d-based Proof-of-Work consensus mechanism to determine the next global state of the blockchain.
  • The Bitcoin protocol has a target block time of 10 minutes, and a maximum supply of 21 million tokens. The only way new bitcoins can be produced is when a block producer generates a new valid block.
  • The protocol has a token emission rate that halves every 210,000 blocks, or approximately every 4 years.
  • Unlike public blockchain infrastructures that support the development of decentralized applications (e.g., Ethereum), the Bitcoin protocol is primarily used for payments and has only very limited support for smart contract-like functionality (Bitcoin “Script” is mostly used to set conditions that must be met before bitcoins can be spent).

2. Bitcoin’s core features

For a more beginner’s introduction to Bitcoin, please visit Binance Academy’s guide to Bitcoin.

Unspent Transaction Output (UTXO) model

A UTXO transaction works like a cash payment between two parties: Alice gives money to Bob and receives change (i.e., the unspent amount). In comparison, blockchains like Ethereum rely on the account model.
https://preview.redd.it/t1j6anf8f3151.png?width=1601&format=png&auto=webp&s=33bd141d8f2136a6f32739c8cdc7aae2e04cbc47
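To make the model concrete, here is a minimal sketch of UTXO bookkeeping in Python. It is illustrative only: the transaction IDs, owners, and amounts are made up, and real Bitcoin identifies outputs by txid:vout and enforces scripts and signatures on top of this accounting.

```python
utxo_set = {
    ("txid_a", 0): {"owner": "Alice", "amount": 1.0},   # Alice holds a 1.0 BTC output
}

def spend(utxo_set, outpoint, payments):
    """Consume one UTXO and create new outputs (payments plus change)."""
    spent = utxo_set.pop(outpoint)                       # the input UTXO is destroyed...
    assert sum(payments.values()) <= spent["amount"], "cannot spend more than the input"
    new_txid = "txid_b"                                  # hypothetical id of the new transaction
    for vout, (owner, amount) in enumerate(payments.items()):
        utxo_set[(new_txid, vout)] = {"owner": owner, "amount": amount}   # ...new UTXOs are created

# Alice pays Bob 0.3 BTC and sends 0.7 BTC back to herself as change.
spend(utxo_set, ("txid_a", 0), {"Bob": 0.3, "Alice": 0.7})
print(utxo_set)
```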

Nakamoto consensus

In the Bitcoin network, anyone can join the network and become a bookkeeping service provider, i.e., a validator. All validators are allowed in the race to become the block producer for the next block, yet only the first to complete a computationally heavy task will win. This feature is called Proof of Work (PoW).
The probability that any single validator finishes the task first is equal to its share of the total network computation power, or hash power. For instance, a validator with 5% of the total network computation power will have a 5% chance of completing the task first, and therefore becoming the next block producer.
Since anyone can join the race, competition is prone to increase. In the early days, Bitcoin mining was mostly done by personal computer CPUs.
As of today, Bitcoin validators, or miners, have opted for dedicated and more powerful devices such as machines based on Application-Specific Integrated Circuit (“ASIC”).
Proof of Work secures the network as block producers must have spent resources external to the network (i.e., money to pay electricity), and can provide proof to other participants that they did so.
With various miners competing for block rewards, it becomes difficult for one single malicious party to gain network majority (defined as more than 51% of the network’s hash power in the Nakamoto consensus mechanism). The ability to rearrange transactions via 51% attacks indicates another feature of the Nakamoto consensus: the finality of transactions is only probabilistic.
Once a block is produced, it is then propagated by the block producer to all other validators to check on the validity of all transactions in that block. The block producer will receive rewards in the network’s native currency (i.e., bitcoin) as all validators approve the block and update their ledgers.
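The “computationally heavy task” can be sketched as a brute-force search for a nonce whose hash falls below a difficulty target. This is a toy loop, not Bitcoin’s real 80-byte header format, and the target is deliberately easy so the loop finishes quickly.

```python
import hashlib

def mine(header: bytes, target: int) -> int:
    """Find a nonce such that double-SHA256(header || nonce) is below the target."""
    nonce = 0
    while True:
        h = hashlib.sha256(hashlib.sha256(header + nonce.to_bytes(8, "little")).digest()).digest()
        if int.from_bytes(h, "big") < target:      # the hash, read as a number, must be under the target
            return nonce
        nonce += 1

# An easy target so the search finishes quickly; lowering the target raises the difficulty.
easy_target = 1 << 240
print(mine(b"previous-hash|merkle-root|timestamp", easy_target))
```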

The blockchain

Block production

The Bitcoin protocol utilizes the Merkle tree data structure in order to organize hashes of numerous individual transactions into each block. This concept is named after Ralph Merkle, who patented it in 1979.
With the use of a Merkle tree, though each block might contain thousands of transactions, all of their hashes can be combined and condensed into one, allowing efficient and secure verification of this group of transactions. This single hash is called the Merkle root, which is stored in the Block Header of a block. The Block Header also stores other meta information of a block, such as a hash of the previous Block Header, which enables blocks to be associated in a chain-like structure (hence the name “blockchain”).
An illustration of block production in the Bitcoin Protocol is demonstrated below.

https://preview.redd.it/m6texxicf3151.png?width=1591&format=png&auto=webp&s=f4253304912ed8370948b9c524e08fef28f1c78d
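A simplified version of that construction in Python. Real Bitcoin hashes the raw 32-byte digests in a specific byte order; this sketch only shows the pairing-and-hashing shape, including the duplication of the last hash on odd levels.

```python
import hashlib

def sha256d(data: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def merkle_root(tx_hashes):
    """Pair up hashes, hash each pair, and repeat until one hash remains."""
    level = list(tx_hashes)
    while len(level) > 1:
        if len(level) % 2 == 1:
            level.append(level[-1])          # Bitcoin duplicates the last hash on odd levels
        level = [sha256d(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

txs = [sha256d(f"tx{i}".encode()) for i in range(5)]
print(merkle_root(txs).hex())
```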

Block time and mining difficulty

Block time is the period required to create the next block in a network. As mentioned above, the node who solves the computationally intensive task will be allowed to produce the next block. Therefore, block time is directly correlated to the amount of time it takes for a node to find a solution to the task. The Bitcoin protocol sets a target block time of 10 minutes, and attempts to achieve this by introducing a variable named mining difficulty.
Mining difficulty refers to how difficult it is for the node to solve the computationally intensive task. If the network sets a high difficulty for the task, while miners have low computational power, which is often referred to as “hashrate”, it would statistically take longer for the nodes to get an answer for the task. If the difficulty is low, but miners have rather strong computational power, statistically, some nodes will be able to solve the task quickly.
Therefore, the 10 minute target block time is achieved by constantly and automatically adjusting the mining difficulty according to how much computational power there is amongst the nodes. The average block time of the network is evaluated after a certain number of blocks, and if it is greater than the expected block time, the difficulty level will decrease; if it is less than the expected block time, the difficulty level will increase.
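The retargeting rule itself is simple arithmetic: every 2016 blocks, the difficulty is scaled by how far the actual timespan deviated from the expected two weeks, clamped to a factor of four in either direction. A sketch, with illustrative values:

```python
EXPECTED_TIMESPAN = 2016 * 10 * 60   # 2016 blocks at 10 minutes each, in seconds

def retarget(old_difficulty: float, actual_timespan: int) -> float:
    # Clamp, as Bitcoin does, so difficulty never moves more than 4x per period.
    actual_timespan = max(EXPECTED_TIMESPAN // 4, min(EXPECTED_TIMESPAN * 4, actual_timespan))
    # Blocks came in faster than expected -> difficulty goes up, and vice versa.
    return old_difficulty * EXPECTED_TIMESPAN / actual_timespan

print(retarget(1.0, EXPECTED_TIMESPAN // 2))  # blocks arrived twice as fast -> difficulty doubles
print(retarget(1.0, EXPECTED_TIMESPAN * 2))   # blocks arrived twice as slow -> difficulty halves
```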

What are orphan blocks?

In a PoW blockchain network, if the block time is too low, it would increase the likelihood of nodes producing orphan blocks, for which they would receive no reward. Orphan blocks are produced by nodes who solved the task but did not broadcast their results to the whole network the quickest due to network latency.
It takes time for a message to travel through a network, and it is entirely possible for 2 nodes to complete the task and start to broadcast their results to the network at roughly the same time, while one node’s message reaches all other nodes earlier because that node has lower latency.
Imagine there is a network latency of 1 minute and a target block time of 2 minutes. A node could solve the task in around 1 minute but his message would take 1 minute to reach the rest of the nodes that are still working on the solution. While his message travels through the network, all the work done by all other nodes during that 1 minute, even if these nodes also complete the task, would go to waste. In this case, 50% of the computational power contributed to the network is wasted.
The percentage of wasted computational power would proportionally decrease if the mining difficulty were higher, as it would statistically take longer for miners to complete the task. In other words, if the mining difficulty, and therefore the target block time, is low, miners with powerful and often centralized mining facilities would get a higher chance of becoming the block producer, while the participation of weaker miners would be in vain. This introduces possible centralization and weakens the overall security of the network.
However, given the limited number of transactions that can be stored in a block, making the block time too long would decrease the number of transactions the network can process per second, negatively affecting network scalability.

3. Bitcoin’s additional features

Segregated Witness (SegWit)

Segregated Witness, often abbreviated as SegWit, is a protocol upgrade proposal that went live in August 2017.
SegWit separates witness signatures from transaction-related data. Witness signatures in legacy Bitcoin blocks often take more than 50% of the block size. By removing witness signatures from the transaction block, this protocol upgrade effectively increases the number of transactions that can be stored in a single block, enabling the network to handle more transactions per second. As a result, SegWit increases the scalability of Nakamoto consensus-based blockchain networks like Bitcoin and Litecoin.
SegWit also makes transactions cheaper. Since transaction fees are derived from how much data is being processed by the block producer, the more transactions that can be stored in a 1MB block, the cheaper individual transactions become.
https://preview.redd.it/depya70mf3151.png?width=1601&format=png&auto=webp&s=a6499aa2131fbf347f8ffd812930b2f7d66be48e
The legacy Bitcoin block has a block size limit of 1 megabyte, and any change to the block size would require a network hard-fork. On August 1st 2017, such a hard-fork occurred, leading to the creation of Bitcoin Cash (“BCH”), which introduced an 8 megabyte block size limit.
Conversely, Segregated Witness was a soft-fork: it never changed the transaction block size limit of the network. Instead, it added an extended block with an upper limit of 3 megabytes, which contains solely witness signatures, to the 1 megabyte block that contains only transaction data. This new block type can be processed even by nodes that have not completed the SegWit protocol upgrade.
Furthermore, the separation of witness signatures from transaction data solves the malleability issue with the original Bitcoin protocol. Without Segregated Witness, these signatures could be altered before the block is validated by miners. Indeed, alterations can be done in such a way that if the system does a mathematical check, the signature would still be valid. However, since the values in the signature are changed, the two signatures would create vastly different hash values.
For instance, if a witness signature states “6,” it has a mathematical value of 6, and would create a hash value of 12345. However, if the witness signature were changed to “06”, it would maintain a mathematical value of 6 while creating a different hash value of 67890.
Since the mathematical values are the same, the altered signature remains a valid signature. This would create a bookkeeping issue, as transactions in Nakamoto consensus-based blockchain networks are documented with these hash values, or transaction IDs. Effectively, one can alter a transaction ID to a new one, and the new ID can still be valid.
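A toy illustration of why this affects transaction IDs: before SegWit, the txid was the double-SHA256 of the whole serialized transaction, signatures included, so two encodings of the “same” signature yield two different txids. The byte strings below are placeholders, not real transactions.

```python
import hashlib

def sha256d(data: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

tx_body = b"version|inputs|outputs|locktime|"   # placeholder for the non-signature data
tx_with_sig     = tx_body + b"\x06"             # signature encoded one way
tx_with_sig_alt = tx_body + b"\x00\x06"         # equivalent signature, alternate encoding

print(sha256d(tx_with_sig).hex())      # original txid
print(sha256d(tx_with_sig_alt).hex())  # different txid, yet the transaction would still validate
```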
This can create many issues, as illustrated in the below example:
  1. Alice sends Bob 1 BTC, and Bob sends Merchant Carol this 1 BTC for some goods.
  2. Bob sends Carol this 1 BTC while the transaction from Alice to Bob is not yet validated. Carol sees this incoming transaction of 1 BTC and immediately ships the goods to Bob.
  3. At this moment, the transaction from Alice to Bob is still not confirmed by the network, and Bob can change the witness signature, therefore changing this transaction ID from 12345 to 67890.
  4. Now Carol will not receive her 1 BTC, as the network looks for transaction 12345 to ensure that Bob’s wallet balance is valid.
  5. As this particular transaction ID changed from 12345 to 67890, the transaction from Bob to Carol will fail, and Bob will get his goods while still holding his BTC.
With the Segregated Witness upgrade, such instances can not happen again. This is because the witness signatures are moved outside of the transaction block into an extended block, and altering the witness signature won’t affect the transaction ID.
Since the transaction malleability issue is fixed, Segregated Witness also enables the proper functioning of second-layer scalability solutions on the Bitcoin protocol, such as the Lightning Network.

Lightning Network

Lightning Network is a second-layer micropayment solution for scalability.
Specifically, Lightning Network aims to enable near-instant and low-cost payments between merchants and customers that wish to use bitcoins.
Lightning Network was conceptualized in a whitepaper by Joseph Poon and Thaddeus Dryja in 2015. Since then, it has been implemented by multiple companies. The most prominent of them include Blockstream, Lightning Labs, and ACINQ.
A list of curated resources relevant to Lightning Network can be found here.
In the Lightning Network, if a customer wishes to transact with a merchant, both of them need to open a payment channel, which operates off the Bitcoin blockchain (i.e., off-chain vs. on-chain). None of the transaction details from this payment channel are recorded on the blockchain, and only when the channel is closed will the end result of both party’s wallet balances be updated to the blockchain. The blockchain only serves as a settlement layer for Lightning transactions.
Since all transactions done via the payment channel are conducted independently of the Nakamoto consensus, both parties involved in transactions do not need to wait for network confirmation on transactions. Instead, transacting parties would pay transaction fees to Bitcoin miners only when they decide to close the channel.
https://preview.redd.it/cy56icarf3151.png?width=1601&format=png&auto=webp&s=b239a63c6a87ec6cc1b18ce2cbd0355f8831c3a8
One limitation of the Lightning Network is that it requires a person to be online to receive transactions addressed to them. Another limitation in user experience is that one needs to lock up some funds every time one wishes to open a payment channel, and those funds can only be used within that channel.
However, this does not mean he needs to create new channels every time he wishes to transact with a different person on the Lightning Network. If Alice wants to send money to Carol, but they do not have a payment channel open, they can ask Bob, who has payment channels open to both Alice and Carol, to help make that transaction. Alice will be able to send funds to Bob, and Bob to Carol. Hence, the number of “payment hubs” (i.e., Bob in the previous example) correlates with both the convenience and the usability of the Lightning Network for real-world applications.
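A toy ledger for a single payment channel makes the idea concrete: off-chain updates only rebalance the two parties’ claims on the locked funds, and only the final state settles on-chain. Real Lightning channels enforce this with signed commitment transactions and revocation keys; everything below (names, amounts) is illustrative.

```python
class Channel:
    def __init__(self, alice_funds: int, bob_funds: int):
        self.balances = {"Alice": alice_funds, "Bob": bob_funds}   # amounts in satoshis

    def pay(self, sender: str, receiver: str, amount: int):
        assert self.balances[sender] >= amount, "insufficient channel balance"
        self.balances[sender] -= amount        # each off-chain update replaces the previous state
        self.balances[receiver] += amount

    def close(self):
        return dict(self.balances)             # only this final state is settled on-chain

channel = Channel(alice_funds=5_000_000, bob_funds=5_000_000)
channel.pay("Alice", "Bob", 1_000_000)
channel.pay("Bob", "Alice", 200_000)
print(channel.close())                         # {'Alice': 4200000, 'Bob': 5800000}
```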

Schnorr Signature upgrade proposal

Elliptic Curve Digital Signature Algorithm (“ECDSA”) signatures are used to sign transactions on the Bitcoin blockchain.
https://preview.redd.it/hjeqe4l7g3151.png?width=1601&format=png&auto=webp&s=8014fb08fe62ac4d91645499bc0c7e1c04c5d7c4
However, many developers now advocate replacing ECDSA with Schnorr signatures. Once Schnorr signatures are implemented, multiple parties can collaborate in producing a signature that is valid for the sum of their public keys.
This would primarily be beneficial for network scalability. When multiple addresses conduct transactions to a single address, each input requires its own signature. With Schnorr signatures, all these signatures would be combined into one. As a result, the network would be able to store more transactions in a single block.
https://preview.redd.it/axg3wayag3151.png?width=1601&format=png&auto=webp&s=93d958fa6b0e623caa82ca71fe457b4daa88c71e
The reduced size in signatures implies a reduced cost on transaction fees. The group of senders can split the transaction fees for that one group signature, instead of paying for one personal signature individually.
Schnorr Signature also improves network privacy and token fungibility. A third-party observer will not be able to detect if a user is sending a multi-signature transaction, since the signature will be in the same format as a single-signature transaction.
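A sketch of why the aggregation works, using textbook secp256k1 arithmetic in Python. This is the naive sum-of-keys scheme, shown only to illustrate the linearity of Schnorr signatures; the actual proposals (MuSig, BIP-340) add per-signer key tweaks to prevent rogue-key attacks and fix encodings. Keys and nonces below are fixed illustrative values (real nonces must be fresh and random).

```python
import hashlib

# secp256k1 parameters and textbook affine point arithmetic (not constant-time; illustrative only).
FIELD = 2**256 - 2**32 - 977
ORDER = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141
G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
     0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)

def point_add(A, B):
    if A is None: return B
    if B is None: return A
    if A[0] == B[0] and (A[1] + B[1]) % FIELD == 0: return None
    lam = ((3 * A[0] * A[0]) * pow(2 * A[1], -1, FIELD) if A == B
           else (B[1] - A[1]) * pow(B[0] - A[0], -1, FIELD)) % FIELD
    x = (lam * lam - A[0] - B[0]) % FIELD
    return (x, (lam * (A[0] - x) - A[1]) % FIELD)

def point_mul(k, A):
    R = None
    while k:
        if k & 1: R = point_add(R, A)
        A, k = point_add(A, A), k >> 1
    return R

def challenge(R, pubkey, msg):
    # Toy encoding of the challenge e = H(R || P || m), reduced to a scalar.
    data = str(R).encode() + str(pubkey).encode() + msg
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % ORDER

x1, x2, r1, r2 = 11111, 22222, 33333, 44444   # two signers' private keys and nonces
P1, P2 = point_mul(x1, G), point_mul(x2, G)
R1, R2 = point_mul(r1, G), point_mul(r2, G)
P_agg = point_add(P1, P2)                     # aggregate public key = sum of public keys
R_agg = point_add(R1, R2)                     # aggregate nonce point
msg = b"send 1 BTC to carol"
e = challenge(R_agg, P_agg, msg)              # both signers use the same challenge

s1 = (r1 + e * x1) % ORDER                    # each signer's partial signature
s2 = (r2 + e * x2) % ORDER
s = (s1 + s2) % ORDER                         # the aggregate signature is just the sum

# Verification uses the ordinary single-signer equation: s*G == R + e*P
assert point_mul(s, G) == point_add(R_agg, point_mul(e, P_agg))
print("aggregated Schnorr-style signature verifies against the summed public key")
```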

4. Economics and supply distribution

The Bitcoin protocol utilizes the Nakamoto consensus, and nodes validate blocks via Proof-of-Work mining. The bitcoin token was not pre-mined, and has a maximum supply of 21 million. The initial reward for a block was 50 BTC per block. Block mining rewards halve every 210,000 blocks. Since the average time for block production on the blockchain is 10 minutes, it implies that the block reward halving events will approximately take place every 4 years.
As of May 12th 2020, the block mining rewards are 6.25 BTC per block. Transaction fees also represent a minor revenue stream for miners.
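The 21 million cap follows directly from that schedule; a quick check in Python, summing 210,000 blocks per subsidy era with integer satoshi halvings:

```python
subsidy_sats = 50 * 100_000_000       # initial block reward, in satoshis
total_sats = 0
halvings = 0
while subsidy_sats > 0:
    total_sats += 210_000 * subsidy_sats
    subsidy_sats //= 2                # integer halving, as in the consensus rules
    halvings += 1

print(total_sats / 100_000_000)      # ~20,999,999.98 BTC, just under 21 million
print(halvings)                      # the subsidy reaches zero after 33 halvings
```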
submitted by D-platform to u/D-platform [link] [comments]

Bottos 2020 Research and Development Scheme


https://preview.redd.it/umh8ivbsua841.png?width=554&format=png&auto=webp&s=5c16d9d9e61503e4c9d44212eecd176eda11550a
As 2020 is now here, Bottos has solemnly released its “2020 Research and Development Scheme”. On one hand, we adhere to the principle of transparency so that the whole community can comprehend our next step as a whole; more importantly, it also helps our whole team to think deeply about the future and reach consensus. It is strongly believed that consistent follow-up on this scheme will help us achieve the best results.
Building on Bottos’ efficient development, the team’s technical achievements in consensus algorithms and smart contracts are used to deepen and optimize the existing technical architecture. At the same time, by drawing on the community’s technical capabilities, developing horizontally, and expanding new functional modules and technical directions, the project stays closely integrated with the whole community.
In the future, we will keep on striving to achieve in-depth thinking, comprehensive planning, and flexible adjustment.


Overview of Technical Routes

https://preview.redd.it/rk9tpg2uua841.png?width=554&format=png&auto=webp&s=77e607b81f31c0d20feaa90eca81f09a23addca4
User feedback within the community is the driving force behind Bottos’ progress. Following the development route of the community and the industry, we have formulated a technical roadmap that points the team in the right direction among the many possible routes of modern technology.
As part of our 2020 research and development objective we have the following arrangements:
1. Strengthening smart contracts and related infrastructure
After many years of development, smart contracts have gradually become the core and standard function in blockchain projects. The strength of smart contracts, ease of use, and stability represent the key capabilities of a blockchain project. As a good start, Bottos has already made great progress in the field of smart contracts. In smart contracts we still need to increase development efforts, making the ease of use and stability of smart contracts the top priority of our future development.
Reducing the barriers for developers and ordinary users, shortening the contract development cycle, and saving users’ time is another important task for the team to accomplish. To this end, we have planned an efficient and easy-to-use one-stop contract development, debugging, and deployment tool that will provide multiple access methods and interfaces to the test network to support rapid deployment and rapid debugging.
2. Establishing an excellent client and user portal
The main goal here is to add an entry point for the creation and deployment of smart contracts in the wallet client. To this end, the wallet needs to be extended: a local compiler for smart contracts must be added, and an easy-to-use UI provided for creating, deploying, and managing contracts, so that users’ needs can be met with a single mouse click.
3. Expanding distributed storage
Distributed storage is another focus of our development in the upcoming year. Only a distributed architecture can completely solve the performance and scalability issues of stand-alone storage. Distributed storage suitable for blockchain needs to provide performance no less than that of a single machine, extremely high availability, no single point of failure, easy expansion, and strongly consistent transactions. These are the main key points and difficulties for Bottos in the field of distributed storage in the upcoming days.
4. Reinforcing multi party secured computing
Privacy in computing is also a very important branch to deal with. In this research direction, Bottos has invested a lot of time and produced many research results on multi-party secured computing, such as technical articles and test cases. In the future, we will continue our efforts in the direction of multi-party secured computing and apply mature technical achievements to the functions of the chain.

2020 Bottos — Product Development

Support for smart contract deployment in wallets
The built-in smart contract compiler inside the wallet supports compilation of the smart contracts in all languages provided by Bottos and integrates with the functions in the wallet. It also supports one-click deployment of the compiled contract source code in the wallet.
When compiling a contract, one can choose whether to pre-execute the contract code. If pre-execution is selected, it will connect to the remote contract pre-execution service and return the execution result to the wallet.
When deploying a contract, one can choose to deploy to the test network or main network and the corresponding account and private key of the test network or main network should be provided.

2020 Bottos-Technical Research

https://preview.redd.it/x2k65j7xua841.png?width=553&format=png&auto=webp&s=a40eae3c56b664c031b3db96f608923e670ff331
1. Intelligent smart contract development platform (BISDP)
The smart contract development platform BISDP is mainly composed of user-oriented interfaces, as well as back-end compilation and deployment tools, debugging tools, and pre-execution frameworks.
The user-oriented interface provides access methods based on WEB, PC, and mobile apps, allowing developers to quickly and easily compile and deploy contracts and provide contract template management functions. It can also manage the contract remotely by viewing the contract execution status, the consumed resources and other information.
In the compilation and deployment tool, a set of smart contract source code editing, running, debugging, and deployment solutions as well as smart contract templates for common tasks are provided, which greatly reduces the threshold for developers to learn and use smart contracts. At the same time, developers and ordinary users are provided with a smart contract pre-execution framework, which can check for logical defects and security risks in smart contracts before actual deployment and promptly remind users of such problems even before the smart contracts are actually run.
In the debugging tool, there are built-in local debugging and remote debugging tools. Multiple breakpoints can be set in the debugging tool. When the code reaches the breakpoint, one can view the variables and their contents in the current execution stack. One can also make conditional breakpoints based on the value of the variable. The code will not execute until the value reaches a preset value in memory.
In the pre-execution framework, developers can choose to pre-execute contract code in a virtual environment or a test net, checking out problems in some code that cannot be detected during compilation time and perform deeper code inspection. The pre-execution framework can also prompt the user in advance about the time and space resources required for execution.
2. Supporting Python and PHP in BVM virtual machine for writing smart contracts
We have added smart contract writing tools based on the Python and PHP languages. These languages can be compiled into the corresponding BVM instruction set for execution, and both can be used as programming languages for smart contracts.
For the Python language, the basic language elements supported by the first phase are:
- Logic control: If, Else, Eli, While, Break, method calls, for x in y
- Arithmetic and relational operators: ADD, SUB, MUL, DIV, ABS, LSHIFT, RSHIFT, AND, OR, XOR, MODULE, INVERT, GT, GTE, LT, LTE, EQ, NOTEQ
Data structure:
- Supports creation, addition, deletion, replacement, and calculation of length of list data structure
- Supports creation, append, delete, replace, and calculation of length of dict data structure
Function: Supports function definition and function calls
For the PHP language, the basic language elements supported by the first phase are :
- Logic control: If, Else, Eli, While, Break, method calls
- Arithmetic and relational operators: ADD, SUB, MUL, DIV, ABS, LSHIFT, RSHIFT, AND, OR, XOR, MODULE, INVERT, GT, GTE, LT, LTE, EQ, NOTEQ
Data structure:
- Support for creating, appending, deleting, replacing, and calculating length of associative arrays
Function: Supports the definition and calling of functions
For these two above mentioned languages, the syntax highlighting and code hinting functions are also provided in BISDP, which is very convenient for developers to debug any errors.
3. Continuous exploration of distributed storage solutions
Distributed storage in blockchain technology actually refers to a distributed database. Compared with a traditional DBMS, in addition to the ACID characteristics of the traditional DBMS, a distributed database also provides the high availability and horizontal expansion of a distributed system. The CAP principle of distributed systems reveals an impossible triangle: only two of its three properties, consistency, availability, and partition fault tolerance, can be chosen. A distributed database used in a blockchain must provide strong consistency. This is due to the characteristics of the blockchain system itself, because it needs to provide reliable distributed transaction capabilities. For these technical issues, before ensuring that the distributed storage solution reaches 100% availability, we will continue to invest more time and technical strength, do more functional and performance testing, and conduct targeted tests for distributed storage systems.
4. Boosting secured multi-party computing research and development
Secured multi-party computing (MPC) is a cryptographic mechanism that enables multiple entities to share data while protecting the confidentiality of the data and without exposing the secret encryption keys. Its properties, such as security and reliability, are important for the realization of the blockchain: transparent sharing of data on the distributed ledger must coexist with privacy protection, including protection of the client wallet’s private key.
At present, the research and development status of the platform provided by Bottos in terms of privacy-enhanced secured multi-party computing is based on the BIP32 / 44 standard in Bitcoin wallets to implement distributed management of client wallet keys and privacy protection.
Considering the higher level of data security and the distributed blockchain account as the public data of each node, further research and development are being planned on:
(1) Use RSA, Paillier, ECDSA, and other public key cryptosystems with homomorphic attributes, as well as the GC, OT, and ZKP protocols, to generate and verify transaction signatures between two parties;
(2) Introduce mainstream international public key systems with higher security and performance, national-standard public key encryption systems, and fewer- or non-interactive ZKP protocols to achieve secured multi-party computing with more than two parties, allowing more nodes to participate in the privacy protection of ledger data.

Summary

After years of exploration, we are now full of confidence in our current research and development direction. We are totally determined to move forward by continuous hard work. In the end, all members of Bottos also want to thank all the friends in the community for their continuous support and outstanding contributions. Your certainty is our greatest comfort and strongest motivation.

Be smart. Be data-driven. Be Bottos.
If you aren’t already in our group, please join now! https://t.me/bottosofficial
Join Our Community and Stay Updated!
Bottos Website | Twitter |Facebook | Telegram | Reddit
submitted by BOTTOS_AI to Bottos [link] [comments]

Technical: Pay-to-contract and Sign-to-contract

What's this? I don't make a Technical post for a month and now BitPay is censoring the Hong Kong Free Press? Shit I'm sorry, it's all my fault for not posting a Technical post regularly!! Now posting one so that we have a censorship-free Bitcoin universe!
Pay-to-contract and sign-to-contract are actually cryptographic techniques to allow you to embed a commitment in a public key (pay-to-contract) or signature (sign-to-contract). This commitment can be revealed independently of the public key / signature without leaking your private key, and the existence of the commitment does not prevent you from using the public key / signature as a normal pubkey/signature for a normal digital signing algorithm.
Both techniques utilize elliptic curve homomorphism. Let's digress into that a little first.

Elliptic Curve Homomorphism

Let's get an oversimplified view of the maths involved first.
First, we have two "kinds" of things we can compute on.
  1. One kind is "scalars". These are just very large single numbers. Traditionally represented by small letters.
  2. The other kind is "points". These are just pairs of large numbers. Traditionally represented by large letters.
Now, an "Elliptic Curve" is just a special kind of curve with particular mathematical properties. I won't go into those properties, for the very reasonable reason that I don't actually understand them (I'm not a cryptographer, I only play one on reddit!).
If you have an Elliptic Curve, and require that all points you work with are on some Elliptic Curve, then you can do these operations.
  1. Add, subtract, multiply, and divide scalars. Remember, scalars are just very big numbers. So those basic mathematical operations still work on big numbers, they're just big numbers.
  2. "Multiply" a scalar by a point, resulting in a point. This is written as a * B, where a is the scalar and B is a point. This is not just multiplying the scalar to the point coordinates, this is some special Elliptic Curve thing that I don't understand either.
  3. "Add" two points together. This is written as A + B. Again, this is some special Elliptic Curve thing.
The important part is that if you have:
A = a * G
B = b * G
Q = A + B
Then:
q = a + b
Q = q * G
That is, if you add together two points that were each derived by multiplying an arbitrary scalar with the same point (G in the above), you get the same result as adding the scalars together first and then multiplying their sum with that same point. Or:
a * G + b * G = (a + b) * G 
And because multiplication is just repeated addition, the same concept applies when multiplying:
a * (b * G) = (a * b) * G = (b * a) * G = b * (a * G) 
Something to note in particular is that there are few operations on points. One operation that's missing is "dividing" a point by a point to yield a scalar. That is, if you have:
A = a * G 
Then, if you know A but don't know the scalar a, you can't do the below:
a = A / G 
You can't get a even if you know both the points A and G.
In Elliptic Curve Cryptography, scalars are used as private keys, while points are used as public keys. This is particularly useful since if you have a private key (scalar), you can derive a public key (point) from it (by multiplying the scalar with a certain standard point, which we call the "generator point", traditionally G). But there is no reverse operation to get the private key from the public key.
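That homomorphism can be checked directly with textbook affine arithmetic on the real secp256k1 curve. A runnable sketch (illustrative only; production code uses constant-time implementations):

```python
FIELD = 2**256 - 2**32 - 977   # the secp256k1 field prime
G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
     0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)

def point_add(A, B):
    # "Add" two points (the special Elliptic Curve thing); None is the identity point.
    if A is None: return B
    if B is None: return A
    if A[0] == B[0] and (A[1] + B[1]) % FIELD == 0: return None
    lam = ((3 * A[0] * A[0]) * pow(2 * A[1], -1, FIELD) if A == B
           else (B[1] - A[1]) * pow(B[0] - A[0], -1, FIELD)) % FIELD
    x = (lam * lam - A[0] - B[0]) % FIELD
    return (x, (lam * (A[0] - x) - A[1]) % FIELD)

def point_mul(k, A):
    # "Multiply" a scalar by a point via double-and-add.
    R = None
    while k:
        if k & 1: R = point_add(R, A)
        A, k = point_add(A, A), k >> 1
    return R

a, b = 123456789, 987654321        # two arbitrary scalars ("private keys")
A, B = point_mul(a, G), point_mul(b, G)

# a * G + b * G really does equal (a + b) * G:
assert point_add(A, B) == point_mul(a + b, G)
print("a*G + b*G == (a+b)*G")
```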

Commitments

Let's have another mild digression.
Sometimes, you want to "commit" to something that you want to keep hidden for now. This is actually important in some games and so on. For example, if you are playing a game of Twenty Questions, one player must first write down the object they are thinking of, then fold or hide the paper in such a way that what they wrote is not visible. Then, after the guessing player has asked twenty questions to narrow down what the object is and has revealed what he or she thinks the object being guessed was, the guessee reveals the object by unfolding and showing the paper.
The act of writing down commits you to the specific thing you wrote down. Folding the paper and/or hiding it, err, hides what you wrote down. Later, when you unfold the paper, you reveal your commitment.
The above is the analogy to the development of cryptographic commitments.
  1. First you select some thing --- it could be anything, a song, a random number, a promise to deliver products and services, the real identity of Satoshi Nakamoto.
  2. You commit to it by giving it as input to a one-way function. A one-way function is a function which allows you to get an output from an input, but after you perform that there is no way to reverse it and determine the original input knowing only the final output. Hash functions like SHA are traditionally used as one-way functions. As a one-way function, this hides your original input.
  3. You give the commitment (the output of the one-way function given your original input) to whoever wants you to commit.
  4. Later, when somebody demands to show what you committed to (for example after playing Twenty Questions), you reveal the commitment by giving the original input to the one-way function (i.e. the thing you selected in the first step, which was the thing you wanted to commit to).
  5. Whoever challenged you can verify your commitment by feeding your supposed original input to the same one-way function. If you honestly gave the correct input, then the challenger will get the output that you published above in step 3.
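The five steps above, sketched with SHA-256 as the one-way function (no salt yet; that is the subject of the next section, and the committed string is of course made up):

```python
import hashlib

secret = "the object I am thinking of is: the Moon"        # step 1: select some thing

commitment = hashlib.sha256(secret.encode()).hexdigest()   # step 2: feed it to a one-way function
print("commitment:", commitment)                           # step 3: hand the output over

revealed = secret                                           # step 4: later, reveal the original input

# step 5: the challenger re-runs the one-way function and compares
assert hashlib.sha256(revealed.encode()).hexdigest() == commitment
print("commitment verified")
```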

Salting

Now, sometimes there are only a few possible things you can select from. For example, instead of Twenty Questions you might be playing a Coin Toss Guess game.
What we'd do would be that, for example, I am the guesser and you the guessee. You select either "heads" or "tails" and put it in a commitment which you hand over to me. Then, I say "heads" or "tails" and have you reveal your commitment. If I guessed correctly I win, if not you win.
Unfortunately, if we were to just use a one-way function like an SHA hash function, it would be very trivial for me to win. All I would need to do would be to try passing "heads" and "tails" to the one-way function and see which one matches the commitment you gave me. Then I can very easily find out what your committed value was, winning the game consistently. In hacking, this can be made easier by making Rainbow Tables, and is precisely the technique used to derive passwords from password databases containing hashes of the passwords.
The way to solve this is to add a salt. This is basically just a large random number that we prepend (or append, order doesn't matter) to the actual value you want to commit to. This means that not only do I have to feed "heads" or "tails", I also have to guess the large random number (the salt). If the possible space of large random numbers is large enough, this prevents me from being able to peek at your committed data. The salt is sometimes called a blinding factor.
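A small sketch of why the salt matters: the unsalted coin-toss commitment can be brute-forced in two guesses, while the salted one cannot be guessed without the blinding factor.

```python
import hashlib, secrets

choice = "heads"

# Unsalted: the guesser simply tries both possible inputs and wins every time.
unsalted = hashlib.sha256(choice.encode()).hexdigest()
cracked = next(c for c in ("heads", "tails")
               if hashlib.sha256(c.encode()).hexdigest() == unsalted)
print("unsalted commitment cracked:", cracked)

# Salted: a large random blinding factor is prepended, so guessing "heads"/"tails" alone is useless.
salt = secrets.token_bytes(32)
salted = hashlib.sha256(salt + choice.encode()).hexdigest()

# To reveal, hand over both the salt and the choice; the challenger recomputes and compares.
assert hashlib.sha256(salt + choice.encode()).hexdigest() == salted
print("salted commitment verified after reveal")
```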

Pay-to-contract

Hiding commitments in pubkeys!
Pay-to-contract allows you to publish a public key, whose private key you can derive, while also being a cryptographic commitment. In particular, your private key is also used to derive a salt.
The key insight here is to realize that "one-way function" is not restricted to hash functions like SHA. The operation below is an example of a one-way function too:
h(a) = a * G 
This results in a point, but once the point (the output) is known, it is not possible to derive the input (the scalar a above). This is of course restricted to having the input be a scalar only, instead of an arbitrary-length message, but you can add a hash function (which can accept an arbitrary-length input) and then make its output (a fixed-length scalar) as the scalar to use.
First, pay-to-contract requires you to have a public and private keypair.
; p is private key
P = p * G
; P is now public key
Then, you have to select a contract. This is just any arbitrary message containing any arbitrary thing (it could be an object for Twenty Questions, or "heads" or "tails" for Coin Toss Guessing). Traditionally, this is symbolized as the small letter s.
In order to have a pay-to-contract public key, you need to compute the below from your public key P (called the internal public key; by analogy the private key p is the internal private key):
Q = P + h(P | s) * G 
"h()" is any convenient hash function, which takes anything of arbitrary length, and outputs a scalar, which you can multiply by G. The syntax "P | s" simply means that you are prepending the point P to the contract s.
The cute thing is that P serves as your salt. Any private key is just an arbitrary random scalar. Multiplying the private key by the generator results in an arbitrary-seeming point. That random point is now your salt, which makes this into a genuine bonafide hiding cryptographic commitment!
Now Q is a point, i.e. a public key. You might be interested in knowing its private key, a scalar. Suppose you postulate the existence of a scalar q such that:
 Q = q * G 
Then you can do the below:
Q = P + h(P | s) * G
Q = p * G + h(P | s) * G
Q = (p + h(P | s)) * G
Then we can conclude that:
 q = p + h(P | s) 
Of note is that somebody else cannot learn the private key q unless they already know the private key p. Knowing the internal public key P is not enough to learn the private key q. Thus, as long as you are the only one who knows the internal private key p, and you keep it secret, then only you can learn the private key q that can be used to sign with the public key Q (that is also a pay-to-contract commitment).
Now Q is supposed to be a commitment, and once somebody else knows Q, they can challenge you to reveal your committed value, the contract s. Revealing the pay-to-contract commitment is done by simply giving the internal public key P (which doubles as the salt) and the committed value contract s.
The challenger then simply computes:
P + h(P | s) * G 
And verifies that it matches the Q you gave before.
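Putting the whole flow together as a runnable sketch, with the same textbook secp256k1 arithmetic as in the homomorphism sketch above. The internal private key p, the contract s, and the hash encoding are made-up illustrative choices, not a standardized scheme.

```python
import hashlib

FIELD = 2**256 - 2**32 - 977
ORDER = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141
G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
     0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)

def point_add(A, B):
    if A is None: return B
    if B is None: return A
    if A[0] == B[0] and (A[1] + B[1]) % FIELD == 0: return None
    lam = ((3 * A[0] * A[0]) * pow(2 * A[1], -1, FIELD) if A == B
           else (B[1] - A[1]) * pow(B[0] - A[0], -1, FIELD)) % FIELD
    x = (lam * lam - A[0] - B[0]) % FIELD
    return (x, (lam * (A[0] - x) - A[1]) % FIELD)

def point_mul(k, A):
    R = None
    while k:
        if k & 1: R = point_add(R, A)
        A, k = point_add(A, A), k >> 1
    return R

def h(point, contract):
    # h(P | s): the point P prepended to the contract s, hashed down to a scalar.
    return int.from_bytes(hashlib.sha256(str(point).encode() + contract).digest(), "big") % ORDER

p = 0xC0FFEE                                   # internal private key (made-up value)
P = point_mul(p, G)                            # internal public key, doubling as the salt
s = b"order: two pizzas, delivery included"    # the contract (made-up value)

Q = point_add(P, point_mul(h(P, s), G))        # the commitment: Q = P + h(P | s) * G

# Reveal: hand over (P, s); the challenger recomputes and compares against Q.
assert point_add(P, point_mul(h(P, s), G)) == Q

# Sign: only the holder of p can compute q = p + h(P | s), and q * G equals Q.
q = (p + h(P, s)) % ORDER
assert point_mul(q, G) == Q
print("pay-to-contract commitment verified; q is the private key for Q")
```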
Some very important properties are:
  1. If you reveal first, then you still remain in sole control of the private key. This is because revelation only shows the internal public key and the contract, neither of which can be used to learn the internal private key. So you can reveal and sign in any order you want, without precluding the possibility of performing the other operation in the future.
  2. If you sign with the public key Q first, then you do not need to reveal the internal public key P or the contract s. You can compute q simply from the internal private key p and the contract s. You don't even need to pass those in to your signing algorithm, it could just be given the computed q and the message you want to sign!
  3. Anyone verifying your signature using the public key Q is unaware that it is also used as a cryptographic commitment.
Another property is going to blow your mind:
  1. You don't have to know the internal private key p in order to create a commitment pay-to-contract public key Q that commits to a contract s you select.
Remember:
Q = P + h(P | s) * G 
The above equation for Q does not require that you know the internal private key p. All you need to know is the internal public key P. Since public keys are often revealed publicly, you can use somebody else's public key as the internal public key in a pay-to-contract construction.
Of course, you can't sign for Q (you need to know p to compute the private key q) but this is sometimes an interesting use.
The original proposal for pay-to-contract was that a merchant would publish their public key, then a customer would "order" by writing the contract s with what they wanted to buy. Then, the customer would generate the public key Q (committing to s) using the merchant's public key as the internal public key P, then use that in a P2PKH or P2WPKH. Then the customer would reveal the contract s to the merchant, placing their order, and the merchant would now be able to claim the money.
Other general uses for pay-to-contract include publishing a commitment on the blockchain without using an OP_RETURN output. Instead, you just move some of your funds to yourself, using your own public key as the internal public key, then selecting a contract s that commits to or indicates what you want to anchor onchain. This should be the preferred technique rather than OP_RETURN. For example, colored coin implementations over Bitcoin usually used OP_RETURN, but the new RGB colored coin technique uses pay-to-contract instead, reducing onchain bloat.

Taproot

Pay-to-contract is also used in the nice new Taproot concept.
Briefly, taproot anchors a Merkle tree of scripts. The root of this tree is the contract s committed to. Then, you pay to a SegWit v1 public key, where the public key is the Q pay-to-contract commitment.
When spending a coin paying to a SegWit v1 output with a Taprooted commitment to a set of scripts s, you can do one of two things:
  1. Sign directly with the key. If you used Taproot, use the commitment private key q.
  2. Reveal the commitment, then select the script you want to execute in the Merkle tree of scripts (prove the Merkle tree path to the script). Then satisfy the conditions of the script.
Taproot utilizes the characteristics of pay-to-contract:
  1. If you reveal first, then you still remain in sole control of the private key.
    • This is important if you take the Taproot path and reveal the commitment to the set of scripts s. If your transaction gets stalled on the mempool, others can know your commitment details. However, revealing the commitment will not reveal the internal private key p (which is needed to derive the commitment private key q), so nobody can RBF out your transaction by using the sign-directly path.
  2. If you sign with the public key Q first, then you do not need to reveal the internal public key P or the contract s.
    • This is important for privacy. If you are able to sign with the commitment public key, then that automatically hides the fact that you could have used an alternate script s instead of the key Q.
  3. Anyone verifying your signature using the public key Q is unaware that it is also used as a cryptographic commitment.
    • Again, privacy. Fullnodes will not know that you had the ability to use an alternate script path.
Taproot is intended to be deployed with the switch to Schnorr-based signatures in SegWit v1. In particular, Schnorr-based signatures have the following ability that ECDSA cannot do except with much more difficulty:
As public keys can, with Schnorr-based signatures, easily represent an n-of-n signing set, the internal public key P can also actually be a MuSig n-of-n signing set. This allows for a number of interesting protocols, which have a "good path" that will be private if that is taken, but still have fallbacks to ensure proper execution of the protocol and prevent attempts at subverting the protocol.

Escrow Under Taproot

Traditionally, escrow is done with a 2-of-3 multisignature script.
However, by use of Taproot and pay-to-contract, it's possible to get more privacy than traditional escrow services.
Suppose we have a buyer, a seller, and an escrow service. They have keypairs B = b * G, S = s * G, and E = e * G.
The buyer and seller then generate a Taproot output (which the buyer will pay to before the seller sends the product).
The Taproot itself uses an internal public key that is the 2-of-2 MuSig of B and S, i.e. MuSig(B, S). Then it commits to a pair of possible scripts:
  1. Release to a 2-of-2 MuSig of seller and escrow. This path is the "escrow sides with seller" path.
  2. Release to a 2-of-2 MuSig of buyer and escrow. This path is the "escrow sides with buyer" path.
Now of course, the escrow also needs to learn what the transaction was supposed to be about. So what we do is that the escrow key is actually used as the internal public key of another pay-to-contract, this time with the script s containing the details of the transaction. For example, if the buyer wants to buy some USD, the contract could be "Purchase of 50 pieces of United States Federal Reserve Green Historical Commemoration papers for 0.357 satoshis".
This takes advantage of the fact that the committer need not know the private key behind the public key being used in a pay-to-contract commitment. The actual transaction it is being used for is committed to onchain, because the public key published on the blockchain ultimately commits (via a taproot to a merkle tree to a script containing a MuSig of a public key modified with the committed contract) to the contract between the buyer and seller.
Thus, the cases are:
  1. Buyer and seller are satisfied, and cooperatively create a signature that spends the output to the seller.
    • The escrow service never learns it could have been an escrow. The details of their transaction remain hidden and private, so the buyer is never embarrassed over being so tacky as to waste their hard money buying USD.
  2. The buyer and seller disagree (the buyer denies having received the goods in proper quality).
    • They contact the escrow, and reveal the existence of the onchain contract, and provide the data needed to validate just what, exactly, the transaction was supposed to be about. This includes revealing the "Purchase of 50 pieces of United States Federal Reserve Green Historical Commemoration papers for 0.357 satoshis", as well as all the data needed to validate up to that level. The escrow then investigates the situation and then decides in favor of one or the other. It signs whatever transaction it decides (either giving it to the seller or buyer), and possibly also extracts an escrow fee.

Smart Contracts Unchained

Developed by ZmnSCPxj here: https://zmnscpxj.github.io/bitcoin/unchained.html
A logical extension of the above escrow case is to realize that the "contract" being given to the escrow service is simply some text that is interpreted by the escrow, and which is then executed by the escrow to determine where the funds should go.
Now, the language given in the previous escrow example is English. But nothing prevents the contract from being written in another language, including a machine-interpretable one.
Smart Contracts Unchained simply makes the escrow service an interpreter for some Smart Contract scripting language.
The cute thing is that there still remains an "everything good" path where the participants in the smart contract all agree on what the result is. In that case, with Taproot, there is no need to publish the smart contract --- only the participants know, and nobody else has to. This is an improvement in not only privacy, but also blockchain size --- the smart contract itself never has to be published onchain, only the commitment to it is (and that is embedded in a public key, which is necessary for basic security on the blockchain anyway!).

Sign-to-contract

Hiding commitments in signatures!
Sign-to-contract is something like the dual or inverse of pay-to-contract. Instead of hiding a commitment in the public key, it is hidden in the signature.
Sign-to-contract utilizes the fact that signatures need to have a random scalar r which is then published as the point R = r * G.
Similarly to pay-to-contract, we can have an internal random scalar p and internal point P that is used to compute R:
R = P + h(P | s) * G 
The corresponding random scalar r is:
r = p + h(P | s) 
The signing algorithm then uses the modified scalar r.
This is in fact just the same method of commitment as in pay-to-contract. The operations of committing and revealing are the same. The only difference is where the commitment is stored.
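To make the shared commit/reveal mechanics concrete, here is a minimal Python sketch (toy values, hand-rolled secp256k1 arithmetic, SHA-256 standing in for h; the helper names `commit` and `reveal_ok` are illustrative, not from any real library). The same tweak P + h(P | s) * G acts as the published output key in pay-to-contract and as the published R in sign-to-contract:

```python
import hashlib

# secp256k1 parameters
p = 2**256 - 2**32 - 977
n = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141
G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
     0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)

def point_add(P, Q):
    if P is None: return Q
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None                      # P + (-P) = point at infinity
    if P == Q:
        lam = (3 * x1 * x1) * pow(2 * y1, -1, p) % p
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def point_mul(k, P):
    R = None
    while k:
        if k & 1: R = point_add(R, P)
        P = point_add(P, P)
        k >>= 1
    return R

def h(P, s):
    # h(P | s): hash of the internal point and the contract text
    data = P[0].to_bytes(32, 'big') + P[1].to_bytes(32, 'big') + s
    return int.from_bytes(hashlib.sha256(data).digest(), 'big') % n

def commit(internal_scalar, contract):
    P = point_mul(internal_scalar, G)
    tweak = h(P, contract)
    # Pay-to-contract: the tweaked point is the output public key Q.
    # Sign-to-contract: the tweaked scalar/point are the nonce r and published R.
    return (internal_scalar + tweak) % n, point_add(P, point_mul(tweak, G)), P

def reveal_ok(published_point, internal_point, contract):
    # Given the internal point and the contract, anyone can check the commitment.
    expected = point_add(internal_point, point_mul(h(internal_point, contract), G))
    return published_point == expected

secret = 0xC0FFEE                         # internal scalar p (toy value)
contract = b"Purchase of 50 commemorative papers for 0.357 satoshis"
tweaked_scalar, published, internal = commit(secret, contract)
assert point_mul(tweaked_scalar, G) == published   # committer still knows the discrete log
assert reveal_ok(published, internal, contract)    # commitment verifies on reveal
print("commitment verified")
```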
Importantly, however, you cannot take somebody else's signature and then create an alternate signature that commits to some s you select. This is in contrast with pay-to-contract, where you can take somebody else's public key and create an alternate public key that commits to some s you select.
Sign-to-contract is somewhat newer as a concept than pay-to-contract, so it seems there are not yet as many applications of sign-to-contract.

Uses

Sign-to-contract can be used, like pay-to-contract, to publish commitments onchain.
The difference is below:
  1. Signatures are attached to transaction inputs.
  2. Public keys are attached to transaction outputs.
One possible use is in a competitor to Open Timestamps. Open Timestamps currently uses OP_RETURN to commit to a Merkle Tree root of commitments aggregated by an Open Timestamps server.
Instead of using such an OP_RETURN, individual wallets can publish a timestamped commitment by making a self-paying transaction, embedding the commitment inside the signature for that transaction. Such a feature can be added to any individual wallet software. https://blog.eternitywall.com/2018/04/13/sign-to-contract/
This does not require any additional infrastructure (i.e. no aggregating servers like in Open Timestamps).

R Reuse Concerns

ECDSA and Schnorr-based signature schemes are vulnerable to something called "R reuse".
Basically, if the same R is used for different messages (transactions) with the same public key, a third party with both signatures can compute the private key.
This is concerning especially if the signing algorithm is executed in an environment with insufficient entropy. By complete accident, the environment might yield the same random scalar r in two different runs. Combined with address reuse (which implies public key reuse) this can leak the private key inadvertently.
For example, most hardware wallets will not have any kind of entropy at all.
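As a toy illustration of the attack just described (for the ECDSA case), the following Python sketch works purely with scalar arithmetic modulo the secp256k1 group order. Here r would normally be the x-coordinate of R = k * G, but the recovery algebra only uses the signing equation s = k^-1 * (z + r*d) mod n, so an arbitrary stand-in value is fine; all key, nonce, and hash values are made-up toy numbers:

```python
n = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141  # secp256k1 group order

def ecdsa_s(z, d, k, r):
    # s component of an ECDSA signature: s = k^-1 * (z + r*d) mod n
    return pow(k, -1, n) * (z + r * d) % n

d = 0x1CEB00DA            # victim's private key (toy value)
k = 0x0BADBEEF            # the reused nonce
r = 0x5EED                # stand-in for x(k*G); identical in both signatures
z1, z2 = 111, 222         # hashes of two different messages

s1 = ecdsa_s(z1, d, k, r)
s2 = ecdsa_s(z2, d, k, r)

# A third party holding (r, s1, z1) and (r, s2, z2) can recover k, then d:
k_rec = (z1 - z2) * pow(s1 - s2, -1, n) % n
d_rec = (s1 * k_rec - z1) * pow(r, -1, n) % n
assert (k_rec, d_rec) == (k, d)
print("recovered private key:", hex(d_rec))
```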
The usual solution, instead of selecting an arbitrary random r (which might be impossible in limited environments with no available entropy), is to derive r deterministically by hashing the message together with the private key (as standardized in RFC 6979).
This ensures that if the same public key is used again for a different message, then the scalar r is also different, preventing reuse entirely.
Of course, if you are using sign-to-contract, then you can't use the above "best practice".
It seems to me plausible that computing the internal random scalar p from the hash of the message (transaction) should work, and then adding the commitment on top of that. However, I'm not an actual cryptographer, I just play one on Reddit. Maybe apoelstra or pwuille can explain in more detail.
Copyright 2019 Alan Manuel K. Gloria. Released under CC-BY.
submitted by almkglor to Bitcoin [link] [comments]

Weekly Dev Update #17

THORChain Weekly Dev Update for Week 12–18 Nov 2019

![](https://miro.medium.com/max/2880/1*Fy-NCZAKhgbZmE-iEIQChQ.png)

Recent Changes

Some recent updates to the protocol:

Update to Emission

The first iteration of the block reward scheme was announced in the previous weekly update. An immediate concern raised by the community was that the emission was too aggressive in the initial year and that rewards dropped off too fast beyond the 5-year mark. Taking Bitcoin’s emission as an example, the emission curve has been updated to target 2% emission after 10 years.
![](https://miro.medium.com/max/2384/1*gqBLvJOl2G4n3IHW1rViKg.png)
The Block Reward is given by the following recurrence equation:
g(n+2) = ((R - (g(n+1) + g(n))) / x) / y
Which evaluates to: ![](https://miro.medium.com/max/1624/1*ttpsRd7HUs2-7hvDGO6elg.png)
where:
* R = Reserve
* x = 6 (arbitrary emission factor)
* y = blocks per year = (seconds per day / seconds per block) × days per year = (86400 / 5) × 365.2425
The final curve thus has a Day 0 emission of 25%, a Year 1 emission of 20%, and a Year 10 emission of 2%.

ChaosNet

The original plan for BEPSwap (prior to the Yggdrasil liquidity breakthrough) was to have it as a separate mainnet before launching the real THORChain in 2020 with cross-chain support. Now THORChain has in-built cross-chain support and a clear roadmap to 99 nodes. This means the mainnet launch will have public, community-run nodes at the start. The community has been fielding many questions about how to run a node, and the mechanics of doing so. Since the THORChain team will not be running any nodes, it is necessary to have a full rehearsal with the community at launch. As such, the plan is for a public ChaosNet on 03 January 2020. ChaosNet will have the following key differences:
* Minimum bond of 100k RUNE.
* Maximum of 12 nodes.
* Churn cycle of 1 day.
* Maximum stake amount of 600k RUNE total.
* 2.7m RUNE Protocol Reserve to emit Bond and Stake rewards.
* Hard-coded Ragnarök at 6 weeks.
Any member who wishes to join ChaosNet to get accustomed to running a node can do so, and will receive Block Rewards roughly equivalent to mainnet (25%). They will be setting up nodes, churning in, servicing the network and earning rewards. The system will hold up to 600k Rune, at which point it will refund any additional staked amount. The community can stake small amounts of real assets, prepare arbitrage bots, set up telegram alert bots and more. In short, it is a public rehearsal with the entire community across all facets (nodes, stakers, traders) so that everyone will have access to the same information and not unfairly benefit when the real mainnet launches. Additionally, the system will be hard-coded to perform a Ragnarök 6 weeks later, which will refund all the remaining reserve as well as bonded and staked assets. This will go a long way in reassuring the community that the system can tolerate all levels of risk, including black-swan events, and that funds are safe at all times.

Internal Arbitrage

A new feature will be launched that will allow users to perform internal arbitrage. This is an asymmetrical withdrawal to Rune, immediately followed by an asymmetrical stake of Rune in another pool. A trader may want to do this instead of transactional arbitrage in order to exploit price differences between two pools in the fastest way possible. Instead of an outgoing transaction being processed, followed by another incoming transaction, Rune balances and stakeUnits are swapped internally, completing within a few seconds.

Fee-based Transaction Prioritisation

Currently there is no prioritisation of the order of transactions; all transactions are simply processed in order of time received. In moments of high demand on network resources (such as when there are large arbitrage opportunities and users are racing to exploit them), transactions will queue in the mempool. If the system cannot respond fast enough, then the reason for the high demand (the large arbitrage opportunity) will persist. The solution is to remove the reason for the high demand in the first place, while collecting the maximum revenue for the system. As such, in the checkTx method (which can triage the mempool), transactions will be sorted by the value of the fee of the swap transaction. Assuming rational actors, the following transactions will then be prioritised over all others:
* A transaction from an impatient swapper who is willing to pay a large fee.
* A transaction from a trader who is able to arbitrage out a price discrepancy (and still make a gain).
This then means the system can collect as much income as possible (good for the stakers) at the same time as prioritising transactions that can arbitrage out large price discrepancies quickly. This then means swaps from transient swappers will experience a market price that accurately matches the reference price at all times.
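A minimal sketch of what such fee-based triage could look like (assumed data shapes and field names only; THORNode's actual checkTx implementation may differ):

```python
from dataclasses import dataclass
from typing import List

@dataclass
class SwapTx:
    sender: str
    fee: float          # fee paid on the swap, in RUNE
    received_at: float  # arrival timestamp

def prioritise(mempool: List[SwapTx]) -> List[SwapTx]:
    # Highest fee first; arrival time only breaks ties between equal fees.
    return sorted(mempool, key=lambda tx: (-tx.fee, tx.received_at))
```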

BEPSwap Development

The team are working on 4 parallel streams of effort. Cross-chain infrastructure has now been merged into a single repo called “THORNode”:
* THORChain
* Midgard Public API
* Threshold Signature Scheme implementation
* Front-end Integration for BEPSwap

THORChain

Bug fixes, refactoring, as well as more logic around Yggdrasil funding. Additionally, node churn and the first part of the block rewards PR were merged.
* Add admin config event, fix tx out events https://gitlab.com/thorchain/bepswap/thornode/merge_requests/255
* Resolve “Select a satellite pool to swap out” https://gitlab.com/thorchain/bepswap/thornode/merge_requests/253
* Include the thorcli volume for the signer. https://gitlab.com/thorchain/bepswap/thornode/merge_requests/261
* Rune Reserves, block rewards, bond units, oh my! https://gitlab.com/thorchain/bepswap/thornode/merge_requests/258
* Add mechanism to slash a node account bond or rewards https://gitlab.com/thorchain/bepswap/thornode/merge_requests/264
* Add add event https://gitlab.com/thorchain/bepswap/thornode/merge_requests/262
* Issue198 node churn https://gitlab.com/thorchain/bepswap/thornode/merge_requests/270
* Issue199 — fix signer doesn’t process multiple txout item https://gitlab.com/thorchain/bepswap/thornode/merge_requests/271
* issue194: only rune get refund for invalid memo https://gitlab.com/thorchain/bepswap/thornode/merge_requests/272
* Outbound — mark txout item out hash based on the coin as well https://gitlab.com/thorchain/bepswap/thornode/merge_requests/273

Midgard Public API

Database ported from InfluxDB to TimescaleDB (more maturity, better developer tooling). Endpoints built out include /pools and /stakers.
* Feature/new endpoint format, refactors and general clean-ups
The OpenApi Schema can be reviewed here:
https://testnet-api.bepswap.net/v1/doc

Threshold Signature Scheme

TSS was successfully implemented into the Genesis ceremony, with the focus now being on the key-gen and key-sign ceremonies. Multicast DNS was switched out for a distributed hash table to facilitate node discovery.
* Issue4 — docker images and ci https://gitlab.com/thorchain/tss/multi-party-ecdsa-docker/merge_requests/5
* Fix a docker bug https://gitlab.com/thorchain/tss/multi-party-ecdsa-docker/merge_requests/6
A proof-of-concept is being prepared using the recently launched Binance Chain TSS library, in order to decide whether to switch libraries. A Go-based implementation is a better fit for THORNode, since THORNode is also written in Go.
https://github.com/binance-chain/tss-lib

Frontend Implementation

Bug fixes and tweaks from community feedback. The frontend is now ready for integration with the latest Midgard API.
* Resolve “Write cypress e2e test for pool stake list view” https://gitlab.com/thorchain/bepswap/bepswap-react-app/merge_requests/164
* Resolve “Update rune token icon” https://gitlab.com/thorchain/bepswap/bepswap-react-app/merge_requests/165
* Resolve “Update confirmation modal” https://gitlab.com/thorchain/bepswap/bepswap-react-app/merge_requests/166
* Resolve “Update wallet view” https://gitlab.com/thorchain/bepswap/bepswap-react-app/merge_requests/167
* Resolve “Add tooltip for wallet connection” https://gitlab.com/thorchain/bepswap/bepswap-react-app/merge_requests/168

Timelines

The team are working toward these milestones:
* Feature Freeze: 20 November 2019 (on time)
* Audit: 20 December 2019 (on time)
* ChaosNet: 03 January 2020 (on time)

Community

To keep up to date, please monitor community channels, particularly Telegram and Twitter:
* Twitter: https://twitter.com/thorchain_org
* Telegram Community: https://t.me/thorchain_org
* Telegram Announcements: https://t.me/thorchain
* Reddit: https://reddit.com/thorchain
* Github: https://github.com/thorchain
* Medium: https://medium.com/thorchain
submitted by thorchain_org to THORChain [link] [comments]

Keychain Accelerates Enterprise Blockchain Adoption with Bitcoin Data Security and Identity Layer

FYI
http://www.keychain.io/2019/09/04/1685/
Uses the Bitcoin blockchain as a public key infrastructure to secure off-chain data.
Capabilities:
Features:
Targeted sectors:
submitted by recursivesalt to u/recursivesalt [link] [comments]

Ardor Improvement Proposal (AIP001?) - Adding Support for the "Ed25519" Digital Signature Algorithm

This post is inspired by some of the ideas in this thread - https://www.reddit.com/Ardocomments/7qane0/confusion_and_inconsistent_instructions_about/. Is there a way to improve the Addresses and Public Keys in Ardor? Yes, there are a few ways... but a really good option might be to smuggle in some extra features too. Like killing two birds with one stone.
Basically, since Bitcoin and NXT launched there has been a lot of very impressive work done on Digital Signature Algorithms and specifically in relation to elliptic curves. A lot of this work has even been motivated by the cryptocurrency space. Ardor has an opportunity to benefit from some of this innovation by making some simple but clever modifications to its code.
Different cryptocurrencies settled on competing standards for their Digital Signature Algorithms during the design phase. For example, Bitcoin uses an elliptic curve called secp256k1 with a Digital Signature Algorithm called ECDSA. Ardor uses an elliptic curve called Curve25519 with a Digital Signature Algorithm called EC-KCDSA. There are various reasons why these choices were selected. The history might be of interest to some of you, but I won't go into that now. The elliptic curves are constants, so the progress over time is on the Digital Signature Algorithms. Curiously, some of the most impressive progress has been made on a DSA that neither Bitcoin nor Ardor use, called Ed25519. But at least Ed25519 builds on the same curve that Ardor already uses - Curve25519.
So what is so great about Ed25519? Well, it's new. You might say that's not necessarily a good thing; we want to see stability with a component like this. Well, it's seeing some rapid adoption because of its awesome new features. See here for adoption insight - https://ianix.com/pub/ed25519-deployment.html. I.e. it's quickly becoming a standard. So without further ado - what makes Ed25519 really cool:
Some of the really technical benefits can be seen on the project's homepage here - https://ed25519.cr.yp.to/. Oh, and it's completely unencumbered by patents or licenses, and the reference code is "public domain".
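For a feel of how simple the signing flow is in practice, here is a short Python example (assuming the third-party PyNaCl package, which wraps libsodium's Ed25519 implementation, is installed):

```python
from nacl.signing import SigningKey

# Generate an Ed25519 keypair and sign a message.
signing_key = SigningKey.generate()
verify_key = signing_key.verify_key

signed = signing_key.sign(b"Ardor AIP001: add Ed25519 support")

# verify() raises nacl.exceptions.BadSignatureError if the signature is invalid.
verify_key.verify(signed.message, signed.signature)
print("signature OK")
```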
I probably haven't done Ed25519 enough justice. It's really great and you can read more about it on the web. But what's this about killing two birds with one stone? Well, by making a reasonably big modification to the Ardor software like this, you could also take the opportunity to fix the Ardor Address Problem. Currently the address derivation scheme prioritizes short human-readable addresses at the expense of security. You can win back that lost security by making an outgoing transaction from your Ardor Address/Account (or making sure the incoming transaction broadcasts the recipient's public key). But even that's not ideal. Anyone who followed the development of Bitcoin will know that moving from Pay-to-Pub-Key to P2PKH was a big step up. But with Ardor it seems like you have to regress to increase your account security, because you lose the security benefit of the extra step of deriving (hashing) the address from the public key.
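To illustrate the "derive the address by hashing the public key" idea described above, here is a simplified sketch (SHA-256 stands in for Bitcoin's HASH160, a hex digest stands in for Base58Check, and the Ed25519 key again assumes the PyNaCl package; Ardor's real address format is different):

```python
import hashlib
from nacl.signing import SigningKey

sk = SigningKey.generate()
pubkey = bytes(sk.verify_key)                      # 32-byte Ed25519 public key
address = hashlib.sha256(pubkey).hexdigest()[:40]  # truncated digest as a toy address

# Until the first outgoing spend reveals `pubkey`, an attacker only ever sees
# `address`, so they must break the hash as well as the signature scheme.
print("address:", address)
```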
So, while making the change of adding support for ed25519, that would be the perfect time to update the Ardor Address format (make the addresses longer to remove the collision problem with the short addresses). The market seems to have decided that long difficult-to-read addresses are not a problem. Wallets and the infrastructure around them has meant that people don't often have to resort to typing out these addresses anyway. If we do this right we can have stronger cold-storage than Bitcoin does and a more flexible DSA than Bitcoin does. Not to mention a more scalable Blockchain, faster block-times, a decentralized exchange, increased transaction capacity, less wasteful consensus algorithm... etc. etc. =D
Let me know what you, the community, think. I'm happy to take any questions.
submitted by ardorer to Ardor [link] [comments]

Nice Article About How HPB Perform Vs EOS (and so ETH)

HPB: Unique Blockchain Infrastructure
Most public chains now acknowledge that TPS (transaction throughput) is a core problem of the blockchain. Traditional blockchains suffer from poor performance: in order to reach consensus, efficiency is sacrificed. But if you want to build an ecosystem of countless DApps on a public chain, that is almost impossible without guaranteed performance.
The dream of building a DApp ecosystem is one that Bitcoin never pursued, and it did not need to. Bitcoin is only a digital currency and it has largely fulfilled its historical mission: it has become a store of value and opened up the world of blockchain.
Ethereum started with the goal of building a worldwide computer providing the infrastructure for decentralized applications, but so far it has only succeeded in the crowdfunding field. Due to performance, cost, scalability, and other issues, it has not yet become a true DApp infrastructure. At the end of 2017, a simple crypto-cat game was enough to congest Ethereum. Ethereum is trying to escape this predicament through techniques such as sharding, Plasma, and PoS consensus.
Newcomers such as EOS highlight their high performance, emphasizing the possibility of reaching million-level TPS. In the future, such an infrastructure is needed to build a prosperous DApp ecosystem on a decentralized foundation and meet the user and business needs of different scenarios.
What kind of solution is the better choice? This is what Blue Fox has been paying attention to. Blue Fox focuses on HPB, a blockchain project that takes a completely different path from other public chains and infrastructure projects. This path is worth the attention of anyone following the blockchain space.
This path is a combination of hardware and software. It is more demanding and the practice is more difficult. However, if it is truly grounded, it may be a good path.
HPB to become a high-performance blockchain infrastructure
Both HPB and EOS have the same goal: to provide a high-performance infrastructure for the decentralized ecosystem. Why? Mainly because of what it takes for blockchain to reach mainstream business scenarios. The current blockchain has achieved some success in security and decentralization, but there are natural constraints in terms of efficiency. This keeps its application scenarios from reaching the mainstream.
This is also a direction that Blockchain 3.0 has been exploring. Through higher performance, lower costs, and better scalability to meet the needs of more decentralized application scenarios.
The current bitcoin and Ethereum's throughput are both worrying. Bitcoin supports about 7 transactions per second on average, and Ethereum has about 15 throughputs. If you make the block bigger, you can also increase the throughput, but it will cause the problem of block bloat. Last year, an encrypted cat game made everyone see the blockchain congestion problem. From a performance point of view, it takes a long time for blockchains to reach the mainstream.
In addition to insufficient TPS, blockchain transaction costs are high. Neither ordinary users nor developers can afford gas costs that are too high. For example, even before Ethereum's crypto games became hot, transaction fees were already expensive.
The HPB and EOS goals are similar, but their paths are completely different. HPB uses a combination of hardware and software, has its own dedicated chip hardware server, which makes it theoretically have higher performance.
HPB is also trying to create an operating system architecture on which applications can be built. This architecture includes accounts, identity and authorization management, policy management, databases, asynchronous communications, program scheduling on CPUs, FPGAs, or clusters, and hardware acceleration technology. It achieves low latency and high concurrency, and realizes million-level TPS to meet the needs of commercial scenarios.
It is different from EOS. Its architecture, in addition to its software architecture and its hardware architecture, is a combination of hardware and software blockchain architecture that combines high-performance computing and cloud computing concepts. The hardware system includes a distributed core node composed of high performance computing hardware, a general communication network, and a cloud terminal supported by high performance computing hardware.
The core node supports a standard blockchain software architecture, including consensus algorithms, network communications, and task processing. It also introduces a hardware acceleration engine. It works with software to achieve high-performance tps through BOE technology (Blockchain Offload Engine) and consensus algorithm acceleration, data compression, and data encryption.
BOE makes HPB unique
In the HPB's overall architecture, compared with other blockchain infrastructures, there are obvious differences. One of the important points is its BOE technology.
BOE mentioned above, is the blockchain offload engine. The BOE engine includes BOE hardware, BOE firmware, and matching software systems. It is a heterogeneous processing system that achieves high performance and high concurrent computational acceleration by combining CPU serial capabilities with the parallel processing capabilities of the FPGA/ASIC chip.
The BOE module parses TCP and UDP packets without involving the CPU, which saves CPU resources. The BOE module performs integrity checking, signature verification, and account balance verification on received messages such as transactions and blocks, performs fragment processing on large data to be transmitted, and encapsulates the fragments to ensure the integrity of received data. At the same time, statistics work will be performed according to the received traffic of the TCP connection, and corresponding incentives will be provided according to the system contribution.
BOE has played its own role in signature verification speed, encryption channel security, data transmission speed, network performance, and concurrent connections.
The BOE acceleration engine embeds an ECDSA module whose main purpose is to improve the speed of signature verification. ECDSA is the elliptic curve digital signature algorithm; although it is mature and widely used, a pure software implementation can only perform a few thousand verifications per second and cannot meet high performance requirements. So the combination of BOE and ECDSA is a good attempt.
In the process of data transmission between different nodes, BOE needs to establish an encrypted channel. In this process, it uses a hardware random number generator to implement the security of the encrypted channel, because the seed of the random number of the key exchange becomes unpredictable.
The BOE acceleration engine also uses block data fragmentation broadcasting technology. Block fragmentation includes a complete block header, which facilitates the broadcast of newly generated blocks to all nodes. With block data fragmentation, network data can be quickly transmitted between different nodes.
The BOE technology can perform traffic statistics of node connections based on hardware, and can calculate network bandwidth data provided by different nodes. Only providing network bandwidth to the system will have the opportunity to become a high contribution value node. In this way, incentives for the contribution of the nodes are provided.
In terms of concurrency, BOE is expected to maintain more than 10,000 TCP sessions and handle 10,000 concurrent sessions through an acceleration engine. BOE's dedicated parallel processing hardware replaces the traditional software serial processing functions such as transaction data broadcasting, unverified blockwide network broadcasting, transaction confirmation broadcasting, and the like.
According to HPB estimates, through the BOE acceleration engine, the session response speed and the number of session maintenance can reach more than 100 times the processing power of the common computing platform node. If the actual environment can be achieved, it is a very significant performance improvement.
Consensus algorithm for internal and external bi-level elections
HPB not only significantly improves performance through BOE, but also adopts a dual-layer internal and external voting mechanism in consensus algorithms. It attempts to achieve more efficient consensus efficiency on the premise of ensuring security and privacy.
Outer election refers to the selection of high-contribution-value node members from many candidate nodes, and the election will use node contribution value evaluation indicators. Inner-layer election refers to an anonymous voting mechanism based on a hash queue. When a block is generated, it calculates which high-contribution value node preferentially generates a block. Nodes with high priority have the right to generate blocks preferentially.
So, how are high-contribution-value nodes chosen? The first step is to evaluate contribution value. The indicators include whether a BOE acceleration engine is configured, network bandwidth contribution (data throughput over a fixed period of time), reputation, and total node token holding time. Among them, the reputation of a node is obtained by analyzing the transactions it participates in, the blocks it packages, and the transactions it forwards. The total token holding time of a node can be obtained from real-time statistics on account information.
The outer election adopts an adaptive, consistency-based election scheme. By maintaining consistent ledgers, the consistency of the outer election is ensured; this reduces network synchronization and also makes use of each node's on-chain data. First, the four evaluation indicators mentioned above are put into the block. By keeping the ledgers consistent, the current ranking of all participating candidate nodes can be calculated, and the higher-ranking nodes become the official high-contribution-value nodes in the next round.
With the formal high contribution value node, the goal of the inner election is to find the high contribution value node corresponding to each block as soon as possible. The entire process is divided into three phases: nominations, statistics, and calculations. These three phases combine security, privacy, and performance.
The first phase is nomination. At the beginning of the voting period, the BOE acceleration engine generates a random Commit. Each high-contribution-value node submits its Commit, and the Commit is synchronized onto the chain generated by the high-performance nodes. The second phase is statistics: after the voting period is over, the Commits recorded in the blockchain are tallied and the ticket pool is created. The last phase is calculation. The calculation is mainly based on the weight algorithm to determine each node's block-generation priority. The highest-priority high-contribution-value node obtains the block packaging right.
Other nodes can verify the random number and address signature according to the principle of verifiable random function, which not only guarantees security, but also guarantees the unpredictability and privacy of high contribution value nodes.
In general, HPB's consensus algorithm combines security, privacy, and speed through a combination of hardware and software. Using the BOE acceleration engine to generate random numbers, contribution value evaluation indicators, coherence ledgers, anonymous voting mechanisms, weight algorithms, signature verification, etc., privacy, reliability, security, and high efficiency are achieved.
Universal virtual machine design: support for different blockchains
The HPB virtual machine adopts a plug-in design mechanism and can support multiple virtual machines. It can implement the combination of the underlying virtual machine and upper level program language translation and support, and support the basic application of virtual machines. In addition, the external interface of the virtual machine can be realized through customized API operations, which can interact with the account data and external data.
The advantage of this mechanism is that it can realize the high performance of native code execution when the smart contract runs, and it can also implement the common virtual machine mechanism supporting different blockchains. For example, it can support Ethereum virtual machine EVM. The smart contract on EVM can also be used on HPB.
Neo's virtual machine NeoVM can also be used on HPB. When high-performance scenarios are needed, users of both EVM and NeoVM need only a few adaptations to interact with other HPB applications.
HPB has also made some improvements to smart contracts, such as lifecycle management, auditing, and common templates. HPB realizes full lifecycle management of smart contracts, including complete and controllable process management and an integrated rights management mechanism for smart contract submission, deployment, use, and retirement.
In smart contract auditing, HPB conducts a protective audit that combines automated tool auditing with professional code design. In terms of templates, HPB gradually formed a generic smart contract template to support the flexible configuration of various common business scenarios.
Incentives for a positive cycle of token economy
When a high-contribution-value node generates a block, it will receive a token reward from the system. By design, the HPB system will issue new tokens at no more than 6% per year, and the newly issued tokens will be proportional to the total number of high-contribution nodes and candidate nodes.
In order to obtain the token reward from the system, it must first become a high contribution value node, and only the high contribution value node has the right to generate a block.
In order to obtain the right to generate a block, it is necessary to contribute, including holding a certain number of HPB tokens, having a BOE hardware acceleration engine, and contributing network bandwidth to the system.
From this mechanism, we can see that HPB's token economic design is built around a positive incentive system: nodes maintain the safe operation of the overall HPB system by holding HPB tokens, running a BOE hardware acceleration engine, and contributing network bandwidth.
HPB landing: supports a variety of high-frequency scenes
In essence, HPB is a high-performance blockchain platform and is an infrastructure where various blockchain applications can be explored. Including blockchain finance, blockchain games, blockchain entertainment, blockchain big data, blockchain anti-fake tracking, blockchain energy and many other fields.
In terms of finance, decentralized lending, decentralized asset management, etc. can all be built on the HPB platform to meet high-frequency lending and transaction scenarios.
In terms of games, although putting every game operation on-chain is not practical, putting game assets such as props on-chain and trading them are important scenarios. Once game items are on-chain, game assets can be made transparent, unique, tamper-proof, and permanent, providing great convenience for trading game items.
Compared with traditional centralized service providers, there are many advantages. For example, there is no need to worry about the loss, confiscation, or alteration of virtual game items. The transaction process is also simple and convenient. Since HPB has a high-performance blockchain, it is expected to support millions of concurrent users, so many high-frequency scenarios can also be satisfied.
For blockchain entertainment, it can support the securitization of celebrity assets, such as celebrity-related tokens. For blockchain big data, it can support data ownership rights, ensuring that the data owner controls the data, that the data is authentic, traceable, and cannot be modified, and finally enabling data transactions according to the needs of different entities while protecting personal privacy and data security.
Based on HPB's high-performance blockchain infrastructure, applications can be built for multiple scenarios. The HPB design provides a blockchain application programming interface and an application development package. At the HPB blockchain base layer, it provides blockchain data access and interaction interfaces, and supports various applications and development languages using JSON-RPC and RESTful APIs. It also supports multi-dimensional blockchain data queries and transaction submission, and the interactive access interface can be integrated with the privilege control system.
The application development package includes comprehensive functional service packages that operate on blockchains based on different development languages. For example, it provides functional interfaces such as encryption, data signature, and transaction generation, and can seamlessly support integration and function expansion of various language service systems. , supports multiple language SDKs such as Java, JavaScript, Ruby, Python, and .NET.
Conclusion
If the future blockchain wants to reach the mainstream population, it must have high-performance public-chain or infrastructure support to form a true application ecosystem. Ethereum's dream of building a decentralized ecosystem cannot be achieved on its existing foundation. Ethereum is trying to improve performance and scalability through sharding, Plasma, and PoS consensus mechanisms.
At the same time, the current status quo has also spawned other public-chain efforts, including EOS, HPB, and others. Among them, HPB has adopted a unique combination of hardware and software with dedicated BOE hardware acceleration; its signature verification speed, encrypted channel security, data transmission speed, network performance, and high-concurrency support all have advantages over purely software-based solutions.
In the software architecture, consensus algorithms for internal and external elections, flexible virtual machine design, application program interfaces, and development packages are also used to provide infrastructure for the development of blockchain application scenarios.
From the overall design of HPB, its goal is to provide high-performance infrastructure for the entire blockchain to mainstream people. With a high-performance infrastructure, blockchains can only be implemented in many high-frequency scenarios to create more application ecosystems and have the opportunity to reach mainstream people.
The HPB team focused on the technical background, including the founder Wang Xiaoming who was an early evangelist in the blockchain and once participated in the establishment of UnionPay Big Data, Beltal, and Beltal CTO. Co-founder CTO Xu Li has more than 10 years of experience in chip industry R&D and management. He was responsible for the logic design, R&D, and FPGA chip marketing of the core products of the world's top qualified equipment suppliers and the world's largest component distributor. Technical VP Shu Shanlin once worked for Inspur, a well-known Chinese server manufacturer, as an embedded chief engineer, and has extensive R&D experience in embedded software and underlying software. Another co-founder, Li Jinxin, is a former blockchain analyst of Guotai Junan and has extensive experience in digital asset investment.
The background of the team members is in line with the HPB's soft and hard path. According to the latest monthly report, the basic PCB layout design of the BOE board, the overall architecture design of the BOE, and the ECC acceleration scheme have also been completed. At the same time, several tests have been completed for the BOE hardware acceleration engine.
It is hoped that HPB will develop rapidly and will embark on a path with its own characteristics in the future of blockchain infrastructure competition. It will provide support for more decentralized applications and eventually build a prosperous ecosystem.
Risk Warning: All Blue Fox articles do not constitute investment recommendations, investment risks, it is recommended to conduct in-depth inspection of the project, and carefully make their own investment decisions.
Source: https://mp.weixin.qq.com/s/RSuz6R6MTotEL_U__Al_Wg
submitted by azerbajian to HPBtrader [link] [comments]

Biased Nonce Sense: Lattice Attacks against Weak ECDSA Signatures in Cryptocurrencies

Cryptology ePrint Archive: Report 2019/023
Date: 2019-01-08
Author(s): Joachim Breitner, Nadia Heninger

Link to Paper


Abstract
In this paper, we compute hundreds of Bitcoin private keys and dozens of Ethereum, Ripple, SSH, and HTTPS private keys by carrying out cryptanalytic attacks against digital signatures contained in public blockchains and Internet-wide scans. The ECDSA signature algorithm requires the generation of a per-message secret nonce. This nonce must be generated perfectly uniformly, or else an attacker can exploit the nonce biases to compute the long-term signing key. We use a lattice-based algorithm for solving the hidden number problem to efficiently compute private ECDSA keys that were used with biased signature nonces due to multiple apparent implementation vulnerabilities.
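To make the attack setting concrete, the sketch below (toy parameters, scalar arithmetic only; not the authors' code) generates a handful of ECDSA signatures whose nonces are artificially biased to 128 bits and builds the hidden number problem lattice whose reduction recovers the private key:

```python
import random

n = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141  # secp256k1 group order
d = random.randrange(1, n)           # victim's long-term key (toy)
m, bias_bits = 8, 128                # 8 signatures, nonces only 128 bits instead of 256
K = 2 ** bias_bits

sigs = []
for _ in range(m):
    k = random.randrange(1, K)       # biased (too-small) nonce
    z = random.randrange(1, n)       # message hash
    r = random.randrange(1, n)       # stands in for x(k*G); only the algebra matters here
    s = pow(k, -1, n) * (z + r * d) % n
    sigs.append((r, s, z))

# Each signature gives k_i = a_i + t_i * d (mod n) with t_i = r_i/s_i and a_i = z_i/s_i.
t = [r * pow(s, -1, n) % n for r, s, z in sigs]
a = [z * pow(s, -1, n) % n for r, s, z in sigs]

# Build the (m+2) x (m+2) hidden-number-problem lattice basis, scaled by n so
# every entry is an integer.
B = [[0] * (m + 2) for _ in range(m + 2)]
for i in range(m):
    B[i][i] = n * n                  # the "mod n" rows
B[m][:m] = [n * ti for ti in t]
B[m][m] = K
B[m + 1][:m] = [n * ai for ai in a]
B[m + 1][m + 1] = n * K

# Reducing B with LLL/BKZ (e.g. fpylll, or Matrix(ZZ, B).LLL() in Sage) should
# yield a short row of the form (n*k_1, ..., n*k_m, K*d, n*K); the private key
# is the second-to-last entry divided by K.
print("built HNP lattice of dimension", len(B))
```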

References
  1. The most repeated r value on the blockchain. https://bitcointalk.org/index.php?topic=1118704.0 (2015)
  2. Bitcoin wiki: Address reuse. https://en.bitcoin.it/wiki/Address reuse (2018)
  3. Akavia, A.: Solving hidden number problem with one bit oracle and advice. In: Halevi, S. (ed.) Advances in Cryptology - CRYPTO 2009. pp. 337–354. Springer Berlin Heidelberg, Berlin, Heidelberg (2009)
  4. Bartoletti, M., Lande, S., Pompianu, L., Bracciali, A.: A general framework for blockchain analytics. In: Proceedings of the 1st Workshop on Scalable and Resilient Infrastructures for Distributed Ledgers. pp. 7:1–7:6. SERIAL ’17, ACM, New York, NY, USA (2017). https://doi.org/10.1145/3152824.3152831, http://doi.acm.org/10.1145/3152824.3152831
  5. Benger, N., van de Pol, J., Smart, N.P., Yarom, Y.: “Ooh aah... just a little bit”: A small amount of side channel can go a long way. In: Batina, L., Robshaw, M. (eds.) Cryptographic Hardware and Embedded Systems – CHES 2014. pp. 75–92. Springer Berlin Heidelberg, Berlin, Heidelberg (2014)
  6. Boneh, D., Venkatesan, R.: Hardness of computing the most significant bits of secret keys in diffie-hellman and related schemes. In: Koblitz, N. (ed.) Advances in Cryptology — CRYPTO ’96. pp. 129–142. Springer Berlin Heidelberg, Berlin, Heidelberg (1996)
  7. Bos, J.W., Halderman, J.A., Heninger, N., Moore, J., Naehrig, M., Wustrow, E.: Elliptic curve cryptography in practice. In: Christin, N., Safavi-Naini, R. (eds.) Financial Cryptography and Data Security. pp. 157–175. Springer Berlin Heidelberg, Berlin, Heidelberg (2014)
  8. Brengel, M., Rossow, C.: Identifying key leakage of bitcoin users. In: Bailey, M., Holz, T., Stamatogiannakis, M., Ioannidis, S. (eds.) Research in Attacks, Intrusions, and Defenses. pp. 623–643. Springer International Publishing, Cham (2018)
  9. Brown, D.R.L.: SEC 2: Recommended elliptic curve domain parameters. http://www.secg.org/sec2-v2.pdf (2010)
  10. Buterin, V.: Ethereum: A next-generation smart contract and decentralized application platform. https://github.com/ethereum/wiki/wiki/White-Paper (2013)
  11. Castellucci, R., Valsorda, F.: Stealing bitcoin with math (2016), https://news.webamooz.com/wp-content/uploads/bot/offsecmag/151.pdf
  12. Chen, Y., Nguyen, P.Q.: BKZ 2.0: Better lattice security estimates. In: ASIACRYPT. Lecture Notes in Computer Science, vol. 7073, pp. 1–20. Springer (2011)
  13. Courtois, N.T., Emirdag, P., Valsorda, F.: Private key recovery combination attacks: On extreme fragility of popular bitcoin key management, wallet and cold storage solutions in presence of poor rng events. Cryptology ePrint Archive, Report 2014/848 (2014), https://eprint.iacr.org/2014/848
  14. Dall, F., De Micheli, G., Eisenbarth, T., Genkin, D., Heninger, N., Moghimi, A., Yarom, Y.: Cachequote: Efficiently recovering long-term secrets of SGX EPID via cache attacks. IACR Transactions on Cryptographic Hardware and Embedded Systems 2018(2), 171–191 (May 2018). https://doi.org/10.13154/tches.v2018.i2.171-191, https://tches.iacr.org/index.php/TCHES/article/view/879
  15. De Mulder, E., Hutter, M., Marson, M.E., Pearson, P.: Using Bleichenbacher’s solution to the hidden number problem to attack nonce leaks in 384-bit ECDSA. In: Bertoni, G., Coron, J.S. (eds.) Cryptographic Hardware and Embedded Systems - CHES 2013. pp. 435–452. Springer Berlin Heidelberg, Berlin, Heidelberg (2013) Biased Nonce Sense 17
  16. Dierks, T., Rescorla, E.: The Transport Layer Security (TLS) protocol. IETF RFC RFC5246 (2008)
  17. Durumeric, Z., Adrian, D., Mirian, A., Bailey, M., Halderman, J.A.: A search engine backed by Internet-wide scanning. In: 22nd ACM Conference on Computer and Communications Security (Oct 2015)
  18. Heninger, N., Durumeric, Z., Wustrow, E., Halderman, J.A.: Mining your Ps and Qs: Detection of widespread weak keys in network devices. In: Proceedings of the 21st USENIX Security Symposium (Aug 2012)
  19. Howgrave-Graham, N.A., Smart, N.P.: Lattice attacks on digital signature schemes. Designs, Codes and Cryptography 23(3), 283–290 (Aug 2001). https://doi.org/10.1023/A:1011214926272, https://doi.org/10.1023/A:1011214926272
  20. Klyubin, A.: Some SecureRandom thoughts. https://android-developers.googleblog.com/2013/08/some-securerandom-thoughts.html (August 2013)
  21. Lenstra, A.K., Lenstra, H.W., Lovasz, L.: Factoring polynomials with rational coefficients. MATH. ANN 261, 515–534 (1982)
  22. Michaelis, K., Meyer, C., Schwenk, J.: Randomly Failed! The State of Randomness in Current Java Implementations. In: CT-RSA. vol. 7779, pp. 129–144. Springer (2013)
  23. Nakamoto, S.: Bitcoin: A peer-to-peer electronic cash system. http://bitcoin.org/bitcoin.pdf (2009)
  24. National Institute of Standards and Technology: FIPS PUB 180-2: Secure Hash Standard (Aug 2002)
  25. National Institute of Standards and Technology: FIPS PUB 186-4: Digital Signature Standard (DSS) (Jul 2013)
  26. Nguyen, P.Q., Shparlinski, I.E.: The insecurity of the elliptic curve digital signature algorithm with partially known nonces. Designs, Codes and Cryptography 30(2), 201–217 (Sep 2003). https://doi.org/10.1023/A:1025436905711, https://doi.org/10.1023/A:1025436905711
  27. Nguyen, P.Q., Stehl´e, D.: LLL on the average. In: Hess, F., Pauli, S., Pohst, M. (eds.) Algorithmic Number Theory. pp. 238–256. Springer Berlin Heidelberg, Berlin, Heidelberg (2006)
  28. Pollard, J.M.: Monte Carlo methods for index computation (mod p). In: Mathematics of Computation. vol. 32 (1978)
  29. Pornin, T.: Deterministic usage of the digital signature algorithm (DSA) and elliptic curve digital signature algorithm (ECDSA). https://tools.ietf.org/html/rfc6979 (2013)
  30. rico666: Large bitcoin collider. https://lbc.cryptoguru.org/
  31. Schnorr, C.P.: A hierarchy of polynomial time lattice basis reduction algorithms. Theor. Comput. Sci. 53(2-3), 201–224 (Aug 1987). https://doi.org/10.1016/0304-3975(87)90064-8
  32. Schnorr, C.P., Euchner, M.: Lattice basis reduction: Improved practical algorithms and solving subset sum problems. Math. Program. 66(2), 181–199 (Sep 1994). https://doi.org/10.1007/BF01581144, http://dx.doi.org/10.1007/BF01581144
  33. Schwartz, D., Youngs, N., Britto, A.: The Ripple protocol consensus algorithm. https://ripple.com/files/ripple consensus whitepaper.pdf (2014), https://ripple.com/files/ripple consensus whitepaper.pdf, accessed: 2016-08-08
  34. Shanks, D.: Class number, a theory of factorization, and genera. In: Proc. of Symp. Math. Soc., 1971. vol. 20, pp. 41–440 (1971)
  35. Team, B.: Android wallet security update. https://blog.blockchain.com/2015/05/28/android-wallet-security-update/
  36. The Sage Developers: SageMath, the Sage Mathematics Software System (Version 8.1) (2017), http://www.sagemath.org
  37. Valsorda, F.: Exploiting ECDSA failures in the bitcoin blockchain. Hack In The Box (HITB) (2014)
  38. Ylonen, T., Lonvick, C.: The Secure Shell (SSH) transport layer protocol. IETF RFC 4253 (2006)
submitted by dj-gutz to myrXiv [link] [comments]

InterValue, a public and functional blockchain designed to surpass Ethereum and EOS

Bitcoin and its underlying blockchain technology gave birth to a decentralized network with a digital currency that has been running for 9 years without any major incident. One of the main problems of this network is its low transaction-processing speed.

The emergence of Ethereum marked the advancement to a new era in the age of blockchain. Ethereum's smart contract technology allows decentralized applications to run on blockchain network. Speed, however, remains a major problem and Ethereum has not been able to achieve real improvements in this field, despite all efforts made by Vitalik Buterin and his team.

EOS, which was created in July 2017, fostered new innovations to increase the processing speed of the network and to create a developer-friendly blockchain environment supposed to make the use of blockchain more widespread.

However, some issues in EOS’ design, such as smart contract vulnerabilities, have triggered skepticism among the public regarding the security of the network and its ability to address some key technical challenges. For now, the community consensus promoted by EOS can solve some problems encountered by most public blockchains with their architecture, but once a flawed smart contract is deployed on the blockchain, it is likely to cause serious damage and even destroy the entire system.

Smart contract security is a key issue for the whole blockchain industry and requires designing new solutions in this regard, simplifying the complexity of smart contract writing and providing secure smart contract templates.

InterValue – a project focusing on blockchain infrastructure and platform-level core technology – may well be the project that will revolutionize the blockchain space with its superior smart contract environment as it provides several innovative solutions to overcome technical challenges.

InterValue uses a hierarchical approach, similar to computer storage architecture, in the implementation of smart contract functions. The Moses Virtual Machine (MVM) supports both declarative non-Turing-complete smart contracts and advanced Turing-complete smart contracts.

The InterValue project conducted in-depth discussions on its unique smart contract system. Those discussions are documented here below in the form of a Q&A.

Q: How does the Intervalue Smart Contract find a balance between security and functionality? In particular, how does the team address the problem of potential quantum attacks?

The InterValue team noticed that security and functionality are often contradictory. Therefore, smart contracts are designed to strike a balance between security and functionality.

InterValue uses the unique architecture of its "Moses Virtual Machine", supporting declarative non-Turing-complete smart contracts and advanced Turing-complete smart contracts. Users can select one of these two types of contracts based on their experience and needs. This allows a good balance between security and functionality, as well as between computational cost-efficiency and complexity. Thus, it can meet diverse needs in terms of transaction characteristics.

Declarative smart contracts are simple to deploy. They are highly secure and their underlying logic is close to that of legal contracts. Advanced Turing complete smart contracts are relatively difficult to deploy and are mainly used to develop DApps with more complex program logic.

For anti-quantum attacks, InterValue also has a solution. Most current blockchain systems use the Elliptic Curve Digital Signature Algorithm (ECDSA) and the SHA family of hash algorithms. However, efficient Shor-algorithm attacks can be performed against ECDSA by a quantum computer.

In order to achieve quantum resistance, InterValue adopted a new anti-quantum-attack cipher algorithm and replaced ECDSA with the NTRUsign-251 signature algorithm. The NTRUsign-251 signature algorithm is a public key cryptography algorithm based on lattice theory. Breaking the security of the signature algorithm requires solving the shortest vector problem in a 502-dimensional integer lattice. The Shor algorithm attack is ineffective in this case.

Quantum computers also have no other algorithms to break this security. The best heuristic algorithms are also exponential, and the time complexity of attacking the NTRUsign-251 signature algorithm is about 2^168. Simulteanously, InterValue replaces the current SHA-1 series algorithm with the SHA-3 program's winning algorithm Keccak512. Unlike the classic HASH algorithm, the Keccak512 algorithm uses a sponge structure, which contains many of the latest design concepts and ideas of hash functions and cryptographic algorithms.

Quantum computers have no great advantage in attacking hash functions. At present, the most effective attack method is Grover's algorithm, which can reduce the attack complexity of a hash algorithm from O(2^n) to O(2^(n/2)).
However, under quantum attack, the first-preimage attack complexity of Keccak512 is 2^256 and its second-preimage attack complexity is 2^128. By using the NTRUsign-251 signature algorithm together with the Keccak512 hash algorithm, InterValue can effectively resist quantum attacks.

Q: InterValue implements smart contract functions in a hierarchical manner. Declarative non-Turing-complete smart contracts, advanced Turing-complete smart contracts, and other technologies greatly expand the range of possible transactions. Can you give more details?

Transaction anonymity and inter-node anonymous communication are also key characteristics of InterValue.

InterValue ensures the anonymous protection of transaction information by making transactions unlinkable and untraceable and constantly improving the anonymity protection.

Unlinkability: for any two outgoing transactions, it is impossible to prove they were sent to the same person.

Untraceability: for each incoming transaction all possible senders are equiprobable.

Unlinkability and untraceability are attributes that must be satisfied by blockchains with strong privacy protection. InterValue guarantees unlinkability and untraceability by using one-time secret key and ring signature technology. InterValue designs and implements a zero-knowledge proof model as an optional feature that further enhances transaction anonymity.

The InterValue underlying communication network adopts a P2P architecture and adds an anonymous access mechanism between nodes to ensure privacy protection. Anonymous communications between nodes are established through local proxy servers that establish virtual circuits and enable application-layer P2P encryption, incremental random path generation and so on. Thus, an observer does not know where the data really comes from or what its real destination is.


Q: Ethereum – the most famous blockchain for smart contracts – has been criticized for a variety of bugs and vulnerabilities. How does InterValue learn from Ethereum's mistakes, and how does it intend to surpass Ethereum?

InterValue noticed that Ethereum’s programing language (Solidity) is literal and lacks the tools to perform security checks.
Intervalue's smart contract security principles focus on 3 aspects:
  1. Enhancement of the Virtual Machine security: by designing and implementing the “Moses Virtual Machine” architecture, the sandbox structure is designed to protect smart contracts from malicious attacks from the system layer. The Moses Virtual Machine mainly protects smart contracts through technologies such as process-level security isolation execution environment, resource access whitelist mechanism, and minimum privilege principle.
  2. Formal verification of embedded smart contract code: the verification function is embedded in the compilation step, which enhances the security and controllability of the smart contract.
  3. Updateable smart contracts: a discarding mechanism is applied to smart contracts that are deemed vulnerable. Besides, the newly deployed contracts can inherit the relevant status and data of existing contracts.
Q: Do Ethereum smart contracts inherently have risks? Is the ERC20 system suitable to build token assets? Does InterValue use Bitcoin’s UTXO model? What new developments are there? InterValue offers advanced smart contract functions for off-chain data access. How is that breakthrough?
According to the InterValue team, two main risks weigh on Ethereum smart contracts. One is the vulnerability caused by the flaw in the design of the contract language itself; the other is the coder's unfamiliarity with the smart contract language. Recently, some Tokens using the ERC20 standard have experienced serious problems, making the industry doubt whether ERC20 is suitable for storing important token assets.
InterValue is designed to integrate UTXO and account-based transactions for efficiency and security considerations. Intervalue's advanced smart contracts for off-chain data access have broken through two main points: safe off-chain data access and safe use of off-chain data.
Safe off-chain data access is mainly realized through the built-in off-chain data access protocol and constructing an internal distributed data storage platform. Data can be read and written through the sandbox mode, and the security of off-chain data is checked before it is read. These features reinforce both security and convenience.
Q: High costs are also a problem in today's blockchain space. As a highly practical decentralized distributed application development platform, how revolutionary is EOS and why is it called blockchain 3.0?
InterValue allows the deployment of applications that are as simple to use as today's Internet and mobile applications, but backed by a sophisticated blockchain infrastructure. It simplifies configuration, which significantly lowers the barrier for the general public.
On the other hand, each application on InterValue's blockchain can set gas costs and fees according to its own needs. This greatly helps developers who need to build low-cost services flexibly.
Compared with the Internet, blockchain technology should likewise be able to support free applications. Making the blockchain free to use is the key to its widespread adoption, and a free platform will also enable developers and businesses to create valuable new services.
Compared with EOS's blockchain 3.0, InterValue's advantages are mainly reflected in the following aspects: its design is oriented toward a practical blockchain 4.0 infrastructure, its technical features are more advanced in design, and the platform is built to support large-scale, popular applications.
Blockchain technology is still far from perfect, and the differences between blockchains remain fairly small. Between the development phase and the construction of the ecosystem, there will not be many active users. With continued development and experience, the InterValue project will become more and more competitive in the future.
The most popular projects are not necessarily the best projects.
Considering its technical strength, transparency, infrastructure, and current market conditions, InterValue has the opportunity to become a top next-generation blockchain project. If it can successfully achieve its goals and solve cross-chain transactions, its double-layer architecture, and problems such as transaction speed, it will have important and historic significance for the blockchain world.
The InterValue project has great potential and could certainly compete with EOS in terms of investment potential.
submitted by intervalue to u/intervalue [link] [comments]

InterValue: Analysis Of A New Anti-quantum Attack Cipher Algorithm

https://preview.redd.it/50gpnoe1wdl11.jpg?width=900&format=pjpg&auto=webp&s=c636ddc4a1c49658cba067084009e557a113b8a8

InterValue aims to provide a global value-Internet infrastructure. To deal with the various problems that exist in present blockchain infrastructure, InterValue optimizes the protocols and mechanisms of blockchain technology at every level so that it can serve as the supporting protocol of a value transmission network. The InterValue 2.0 testnet has now been released, with a newly designed and implemented HashNet consensus mechanism; transaction speed exceeds 280,000 TPS within a single shard and 4 million TPS across the whole network. Security, specifically its anti-quantum attack characteristics, is undoubtedly a highlight of InterValue's goal of building low-level infrastructure for the whole ecosystem.
What is a quantum attack?
Quantum computing is a new way of building computers: it uses the quantum properties of particles to perform operations on data, in a way fundamentally unlike traditional computers. In some cases the resulting algorithmic speed-up is extraordinary. It is this characteristic that makes some problems that are intractable on classical, electronic computers easy to compute on a quantum computer. This superior computing power threatens the security of existing public-key cryptography, which is based on computational complexity. That is what a quantum attack is.
What does anti-quantum attack mean?
Algorithms have always been the core of blockchain technology, and most current algorithms cannot withstand quantum attacks, which means all of a user's information would be exposed to a quantum computer. With an anti-quantum attack algorithm, personal information is safe; at least with current technology it cannot be cracked. Anti-quantum attack algorithms therefore mean security. The impact of quantum attacks on digital currencies would be devastating: quantum attacks directly break existing information-security systems, exposing assets across the digital industry, including mining proceeds; wallet keys could be cracked and wallets would no longer be secure. In short, the existing security system would disintegrate. It is therefore imperative to develop anti-quantum attack algorithms in advance, as a necessary technical means of firmly protecting users' privacy.
InterValue adopts new anti-quantum cryptographic algorithms at the anti-quantum attack level. By replacing the ECDSA signature algorithm with the integer-lattice-based NTRUSign signature algorithm, and replacing the existing SHA-series algorithms with the Keccak-512 hash algorithm, it reduces the threat posed by rapid advances in quantum computing.

Adopt NTRUsign digital signature algorithm
Current ECDSA signature algorithm
Current blockchains mainly use the ECDSA digital signature algorithm, which is based on elliptic curves. The scheme works as follows: first, a public/private key pair is generated; the user keeps the private key, and the public key can be distributed to others. Next, the private key is used to sign a specific message. Finally, any party that holds the signer's public key can verify the signature. ECDSA has the advantages of small system parameters, fast processing, small key sizes, strong resistance to classical attacks, and low bandwidth requirements. However, a quantum computer can run Shor's algorithm very efficiently against the elliptic-curve discrete-logarithm problem underlying ECDSA, so the ECDSA signature algorithm cannot resist quantum attacks.
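For concreteness, the key-generation / sign / verify flow described above can be reproduced with the Python ecdsa package (the same library excerpted later on this page). This is a generic ECDSA sketch, not InterValue code; the curve and message are illustrative.

```python
# Generic ECDSA sketch with the python-ecdsa package (pip install ecdsa).
# Illustrates the keygen / sign / verify flow; not InterValue-specific code.
from ecdsa import SigningKey, SECP256k1, BadSignatureError

sk = SigningKey.generate(curve=SECP256k1)   # private key (Bitcoin's curve)
vk = sk.get_verifying_key()                 # public key, safe to distribute

message = b"transfer 1 BTC to Alice"
signature = sk.sign(message)                # sign with the private key

try:
    vk.verify(signature, message)           # anyone with the public key can verify
    print("signature valid")
except BadSignatureError:
    print("signature invalid")
```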
Adopt new NTRUsign-251 signature algorithm
At present, public-key cryptosystems that resist the quantum Shor algorithm mainly include lattice-based public-key cryptography, code-based systems represented by the McEliece cryptosystem, and multivariate-polynomial systems represented by MQ public-key cryptography. The security of the McEliece cryptosystem rests on the error-correcting-code decoding problem; it is strong in security but low in computational efficiency. The MQ cryptosystem, that is, the multivariate quadratic polynomial public-key cryptosystem, is based on the intractability of solving systems of multivariate quadratic equations over a finite field, but it has clear disadvantages in terms of security. In contrast, public-key schemes based on lattice theory are simple, fast, and require little storage space. InterValue uses the lattice-based signature algorithm NTRUSign-251. The specific implementation process of the algorithm is as follows:

https://preview.redd.it/byyzx8k3wdl11.png?width=762&format=png&auto=webp&s=d454123cabbe730271b66362a55e17b861ad50b4
It has been proved that the security of the NTRUSign-251 signature algorithm is ultimately equivalent to finding the shortest vector in a 502-dimensional integer lattice. Shor's algorithm does not apply to the shortest-vector problem in a lattice, and no other fast quantum solution is known; the best heuristic attacks are still exponential, with a time complexity of roughly 2^168 for attacking NTRUSign-251. Therefore, InterValue's use of NTRUSign-251 resists Shor-style attacks under quantum computing.

Adopt Keccak512 hash algorithm
The common anti-quantum hash algorithm
The most effective known quantum attack on hash algorithms is Grover's algorithm, which reduces the attack complexity of a hash with n-bit output from O(2^n) to O(2^(n/2)). Bitcoin currently uses RIPEMD-160, whose output length is only 160 bits, so under quantum attack the hash algorithm used in that currency system is no longer safe. An effective way to resist quantum attacks is to blunt Grover's algorithm by increasing the output length of the hash; it is generally believed that a hash algorithm can effectively resist quantum attacks as long as its output length is at least 256 bits. Beyond quantum attacks, a series of widely used hash functions such as MD4, MD5, SHA-1, and HAVAL have been broken by traditional techniques such as differential analysis, modular differences, and message-modification methods, so a blockchain's hash algorithm must also resist classical attacks.
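A quick back-of-the-envelope check of the Grover argument above: an n-bit output gives roughly n/2 bits of post-quantum preimage security, so only hashes of 256 bits and above stay comfortably out of reach.

```python
# Effective preimage security under Grover's algorithm: n-bit output -> ~n/2 bits.
hashes = {"RIPEMD-160": 160, "SHA-256": 256, "Keccak-512": 512}

for name, n in hashes.items():
    classical = n          # classical preimage search ~ O(2^n)
    quantum = n // 2       # Grover reduces this to ~ O(2^(n/2))
    print(f"{name:11s}  classical 2^{classical}  quantum ~2^{quantum}")
```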
The winning hash algorithm: Keccak512
Early blockchain projects such as Bitcoin, Litecoin, and Ethereum used SHA-series hash algorithms that have known (though not fatal) design weaknesses. More recent blockchain projects have instead adopted SHA-3, the new hash algorithm standardized by the National Institute of Standards and Technology.
InterValue adopts Keccak512, the winning algorithm of the SHA-3 competition, which incorporates many of the latest design concepts and ideas in hash functions and cryptographic algorithms. It is simple in design and convenient to implement in hardware. The algorithm was submitted by Guido Bertoni, Joan Daemen, Michaël Peeters, and Gilles Van Assche in October 2008. Keccak512 uses a standard sponge construction that maps an input of arbitrary length to a fixed-length output, and it is fast, averaging about 12.5 cycles per byte on an Intel Core 2 processor.

https://preview.redd.it/z0nnrjp4wdl11.jpg?width=724&format=pjpg&auto=webp&s=bef29aafeb1ef74b21bacb6db3f07987bf0a7ba5
As shown in the figure, in the absorbing phase of the sponge construction each message block is XORed into the r rate bits of the 1600-bit state while the remaining c capacity bits stay fixed, and the round function f is then applied; the state then moves on to the squeezing phase, where a hash of fixed n-bit output length is produced, the permutation iterating over 24 rounds. Each round R differs only in the round constant added in its last step, and that round constant is often ignored in collision attacks. The algorithm has been shown to have good differential properties, and to date third-party cryptanalysis has not revealed any security weakness in Keccak512. Under a quantum computer, the preimage attack complexity against Keccak512 is 2^256 and the second-preimage attack complexity is 2^128, so InterValue's use of Keccak512 can resist Grover-style attacks under quantum computing.
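As a small illustration of the 512-bit sponge output discussed above (assuming the pycryptodome package is installed; note that the original Keccak-512 and the finalized SHA3-512 differ only in padding, so their digests differ):

```python
# Keccak-512 vs. standardized SHA3-512 (sketch; assumes pycryptodome is installed).
import hashlib
from Crypto.Hash import keccak   # pip install pycryptodome

msg = b"InterValue"

k = keccak.new(digest_bits=512)
k.update(msg)
print("Keccak-512 :", k.hexdigest())                      # original Keccak padding

print("SHA3-512   :", hashlib.sha3_512(msg).hexdigest())  # NIST SHA-3 padding
# Both map arbitrary-length input to a fixed 512-bit output via the sponge construction.
```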

In closing
Quantum computing has taken 40 years to go from theory to practice. It has now entered the stage where quantitative change becomes qualitative change in technology accumulation, business environment, and performance. For the blockchain, the most deadly threat is not investors' doubt but the accelerating development of quantum computers. In the future, quantum computers are likely to upend the traditional technical route of classical computing and open a much larger field of development. We are clear-eyed about their destructive power for existing blockchains, and we look forward to helping the entire blockchain industry shape a new ecosystem. On the threshold of this new era of quantum technology and a trust-based society, the InterValue team believes that only by fully understanding the essence of quantum cryptography (quantum communication) and anti-quantum cryptography can we calmly take the high ground and lay out the blueprint.
submitted by intervalue to InterValue [link] [comments]


InterValue: From Blockchain to Value Interconnect

https://preview.redd.it/k7ypmeqi1b611.jpg?width=1000&format=pjpg&auto=webp&s=187bbc2ead99a46ce258c5d2a15363637a9e393d
Were it not for the sudden surge of Bitcoin, many of us might still not know what a blockchain is. Bitcoin existed for almost ten years while remaining little known; then, almost overnight, it was rushed into and made some people rich. That inevitably drew attention to the underlying blockchain technology; people began to research it and found that it actually has the potential to change the future of the world. Speculators then flocked to the blockchain, producing the situation we are familiar with today.
When it comes to the blockchain, most people know only a little about it rather than its real substance. Even those who want to study it carefully often cannot make sense of it, which is why so many people resort to all kinds of analogies to explain the concept to the public.
In a narrow sense, a blockchain is a chained data structure in which data blocks are connected sequentially in time order, forming a distributed ledger that cryptography guarantees to be tamper-resistant and unforgeable.
Broadly speaking, blockchain technology is a completely new distributed infrastructure and computing paradigm that uses the blockchain data structure to verify and store data, uses distributed node consensus algorithms to generate and update data, uses cryptography to secure data transmission and access, and uses smart contracts composed of automated script code to program and manipulate data.
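To make the narrow definition concrete, here is a minimal, generic sketch of blocks chained by hashes in time order; it is an illustration only and does not follow any particular blockchain's data format.

```python
# Minimal sketch of the narrow definition: time-ordered blocks linked by hashes.
# Generic illustration only; not the data format of any particular blockchain.
import hashlib
import json
import time

def make_block(prev_hash, data):
    block = {"timestamp": time.time(), "data": data, "prev_hash": prev_hash}
    serialized = json.dumps(block, sort_keys=True).encode()
    block["hash"] = hashlib.sha256(serialized).hexdigest()
    return block

genesis = make_block(prev_hash="0" * 64, data="genesis")
second = make_block(prev_hash=genesis["hash"], data="tx: alice -> bob 10")
# Tampering with genesis["data"] would change its hash and break the link to `second`.
```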
Blockchain technology is not only the fixed form we are familiar with; it has gone through many iterations since its beginning and is still undergoing continuous improvement and development.
01. Blockchain 1.0: Cryptocurrency.
In early 2009 the Bitcoin network was officially launched. As a virtual currency system, the total supply of bitcoin is defined by the network consensus protocol; no individual or institution can freely modify the supply or the transaction records. The underlying technology of Bitcoin, the blockchain, is actually an extremely ingenious distributed shared ledger and peer-to-peer value-transfer technology, with the potential to be as influential as the invention of double-entry bookkeeping.
02. Blockchain 2.0: Smart contracts.
Around 2014 the industry began to recognize the importance of blockchain technology and to create common technology platforms offering developers BaaS (Blockchain as a Service), which greatly improved transaction speed and reduced resource consumption. Ethereum, the originator of smart contracts, took the blockchain into the 2.0 era. Blockchain 2.0 applications incorporate the concept of "smart contracts" (using program algorithms instead of humans to execute contracts). This allows the blockchain to expand from the initial currency system to the registration and transfer of equity, claims, and property rights, the trading and execution of securities and financial contracts, and even sectors such as gaming and anti-counterfeiting.
03. Blockchain 3.0: Blockchain technology application.
After 2015, with the rise of Blockchain 3.0 technology based on DAG data structures, such as Byteball and IOTA, blockchain systems became more efficient, scalable, and interoperable, and offered a better user experience than before. Applications of blockchain gradually extended to healthcare, IP copyright, education, and IoT, and to broader domains such as the sharing economy, communications, social management, charity, culture, and entertainment.
04. Blockchain 4.0: Blockchain ecosystem.
Recently, Blockchain 4.0 technology based on the Hashgraph data structure has gradually attracted the attention of the industry. Consensus algorithms based on Hashgraph can achieve a qualitative leap in transaction throughput and scalability. The blockchain will become the infrastructure of industry and form a consolidated ecosystem, changing people's lifestyles extensively and profoundly.
05. From Blockchain to value interconnect
The Internet allows some of human society's information to be transferred and shared online, creating an interconnection of information. The blockchain will allow all information, including all digital assets and real assets of human society, to be transferred and shared on a value-based Internet, creating an interconnection of value. Realizing this value interconnection, namely implementing the underlying network protocols for value-transmission networks, building a global value Internet, and providing the base network for all kinds of value-transfer applications, is the great vision of the InterValue project.
Key technologies of the platform and the features of its blockchain infrastructure are InterValue's top focus. These features include anonymous P2P protocols, a new anti-quantum hash algorithm and signature algorithm, a unique double-layer consensus and mining mechanism with transaction-anonymity protection, and Turing-complete smart contracts. It uses a fair distribution mechanism and supports third-party asset issuance, cross-chain communication, and multi-chain merging across public, permissioned, and consortium chains and other forms, all falling within the practical scope of a Blockchain 4.0 infrastructure.
In order to create a completely decentralized network value transmission ecosystem and completely open community ecosystem, InterValue has made significant improvements in all aspects of the blockchain infrastructure and has made breakthrough innovations at some levels.
Major technological innovations include:
  1. Consensus: we design an efficient and secure double-layer consensus mechanism consisting of HashNet consensus and BA-VRF (Byzantine Agreement based on Verifiable Random Function) consensus, which supports high transaction concurrency, fast confirmation, and the building of ecosystems for different application scenarios.
  2. Cross-chain and chain merging: we adopt chain-relaying technology to solve the problems of cross-chain transactions and transparent operation among multiple chains, which maintains the independence of cross-chain operations while reusing the various functions of InterValue.
  3. Industrial application: we design many common industrial interfaces in the form of JSON-RPC, satisfying different scenarios such as circulation payments, data transmission, data search, and contract invocation (see the sketch after this list).
  4. Smart contracts: we design the Moses virtual machine (MVM), which supports declarative non-Turing-complete contracts as well as advanced Turing-complete contracts programmed in the Moses language. MVM can access off-blockchain data conveniently and securely, and supports the issuance of third-party assets, which can be integrated into applications on public, permissioned (private), or consortium (hybrid) blockchains.
  5. Anti-quantum attack: new anti-quantum algorithms are adopted, replacing the existing SHA-series algorithms with the Keccak-512 hash algorithm and replacing the ECDSA signature algorithm with the integer-lattice-based NTRUSign signature algorithm. These algorithms reduce the threat coming from the development of quantum computing and the gradual spread of quantum computers.
  6. Transaction anonymity: building on the anonymity features of cryptocurrencies such as Monero and Zcash, zero-knowledge proofs and ring signatures are applied to transaction anonymity and privacy protection, delivering a high cost-effectiveness ratio and excellent security to satisfy the privacy requirements of different application scenarios.
  7. Underlying P2P network: combining the advantages of Tor-based anonymity and blockchain-based distributed VPNs, we design a novel anonymous P2P overlay network, including an anonymous access method and an encrypted communication protocol, which greatly enhances the anonymity of nodes in the network and makes it hard to trace node addresses or crack the communication protocol.
  8. Ecological incentives: various token allocation methods are used, supporting double-layer mining for incentives.
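As a rough sketch of the JSON-RPC style of interface mentioned in point 3 above: the endpoint, method name, and parameters below are hypothetical and are not documented InterValue APIs; only the JSON-RPC 2.0 envelope itself is standard.

```python
# Hypothetical JSON-RPC call sketch; the endpoint, method, and params are invented
# for illustration and are not documented InterValue APIs.
import json
import urllib.request

def rpc_call(endpoint, method, params, req_id=1):
    payload = json.dumps({"jsonrpc": "2.0", "id": req_id,
                          "method": method, "params": params}).encode()
    req = urllib.request.Request(endpoint, data=payload,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.load(resp)

# Example (hypothetical): query a contract through a local node.
# result = rpc_call("http://127.0.0.1:8545", "contract_call",
#                   {"address": "0xabc...", "function": "balanceOf", "args": ["0xdef..."]})
```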
InterValue Key Features Design
In a world where InterValue is widely used, payment, evaluation, preservation, and judgments of legality can happen automatically for any behavior or activity, and people can choose whether or not their lifelong activities are preserved. A virtual person possessing an individual's complete awareness and fully autonomous intelligence is born. Once people transfer their various real-life assets onto the chain, the virtual person representing the individual as a social entity, together with those assets, is kept on the chain forever and continues to evolve, realizing a kind of virtual eternal life for human society, so that every participant obtains a corresponding manifestation of value and humanity is led into the world of value interconnection.
submitted by intervalue to u/intervalue [link] [comments]

- Hack bitcoin (private script) 2019
- Elliptic Curve Digital Signature Algorithm (ECDSA) (Money Button Documentation Series)
- Introduction to Bitcoin with Yours Bitcoin, Lecture 5: ECDSA
- Bitcoin 101 - Elliptic Curve Cryptography - Part 5 - The Magic of Signing & Verifying
- Bitcoin 101 - Elliptic Curve Cryptography - Part 4 ...

By "message" is meant any data, from text to binary, that needs to be authenticated. Specifically, Bitcoin clients produce signatures to authenticate their transactions, while miners verify such signatures before authorizing them.

The ECDSA signature method is the elliptic-curve equivalent of the DSA method and is used extensively in Bitcoin. With it, we create a private key (priv) and then generate a public key ...

[Figure 2] If Bob encrypts a message with Alice's public key, only Alice's private key can decrypt the message. This principle is what allows the SSH protocol to authenticate identity: if Alice (the client) can decrypt Bob's (the server's) message, it proves Alice is in possession of the paired private key. This is, in theory, how SSH key authentication should work. Unfortunately with the ...

ECDH key exchange with a remote party: the original excerpt shows this flow using the Python ecdsa package (a cleaned-up, runnable version follows below).

To verify with ECDSA, the algorithm takes the public key, the data, and a signature (r, s). The data is hashed before it is verified. If the signature verifies, the verifier knows that only someone who possesses the private key could have produced it. Signing and verifying with bsv: bsv has an object called ECDSA, accessible at bsv.crypto ...
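A minimal, runnable version of the ECDH excerpt above, following the python-ecdsa package's documented API; the PEM file name is just the placeholder path from the excerpt.

```python
# ECDH key exchange with a remote party using the python-ecdsa package.
# Completed from the excerpt above; "remote_public_key.pem" is a placeholder path.
from ecdsa import ECDH, NIST256p

ecdh = ECDH(curve=NIST256p)
ecdh.generate_private_key()
local_public_key = ecdh.get_public_key()
# send `local_public_key` to the remote party and receive `remote_public_key` back

with open("remote_public_key.pem") as e:
    remote_public_key = e.read()
ecdh.load_received_public_key_pem(remote_public_key)

secret = ecdh.generate_sharedsecret_bytes()   # both sides derive the same shared secret
```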


Hack bitcoin (private script) 2019

- This video shows how easy it is to paste, verify, and sign a message using an ECDSA private key behind a Bitcoin address.
- Bitcoin 101 - Elliptic Curve Cryptography - Part 4 - Generating the Public Key (in Python)
- Getting the ECDSA Z Value from a Bitcoin Single Input Transaction
- Public key cryptography - Diffie-Hellman Key Exchange (full version) ...
- Bitcoin 101 - Quindecillions & The Amazing Math Of Bitcoin's Private Keys
- Fast Secure Two Party ECDSA Signing
- How Bitcoin ...
- We are covering all of the material a developer needs to know to be able to build applications on the Bitcoin SV (BSV) blockchain, with or without Money Button. This documentation assumes the ...
