
What I currently use for privacy (after almost 2 years of long investing into it)

First of all, my threat model: I'm just an average person who wants to avoid being monitored and tracked by the government and big corps as much as I can. A lot of people out there REALLY hate me and I've gone through lots of harassment and other stuff, and I also plan to take my activism and love for freedom more seriously and to do things that could potentially put me in very high danger or even put my life on the line. That being said, my main focus is on something privacy-friendly but also with decent security (no point having a lot of privacy if a script kiddie can just break into it and boom, everything is gone). Anonymity is also desirable, but I'm pretty aware that true 100% anonymity is simply not possible, and to get as close to it as you currently can, you'd have to give up A LOT of stuff, which I don't think I really could. So basically, everything I said above + I don't want to give up some hobbies of mine (like playing games etc)
Here's what I use/have done so far; most of it is based on the privacytools.io list and research I've done.
Mobile:
Google Pixel 3a XL running GrapheneOS
Apps: Stock apps (Vanadium, Gallery, Clock, Contacts etc) + F-DROID, NewPipe, OsmAnd+, Joplin, Tutanota, K-9 Mail, Aegis Authenticator, KeePassDX, Syncthing, Signal, Librera PRO, Vinyl, Open Camera and Wireguard.
I also use BlahDNS as my private DNS.
Other smartphone stuff/habits: I use a Supershieldz Anti Spy Tempered Glass Screen Protector on my phone and I also have a Faraday Sleeve from Silent Pocket, which my phone is in most of the time (I don't have a smartphone addiction and would advise you to break free from it if you have one). I NEVER use Bluetooth (thank god the Pixel 3a has a headphone jack, so no Bluetooth earphones here) and I always keep my Wi-Fi off if I'm not using it.
Computer:
I have a desktop that I built (specs: Asus B450M Gaming, AMD Ryzen 3 3300X, Radeon RX 580 8GB, 16GB DDR4 2666MHz, 3TB HDD, 480GB SSD) that is dual-booted with QubesOS and Arch Linux.
Qubes is my main OS and daily driver for my tasks; I use Arch for gaming.
I've installed the linux-hardened kernel and its headers packages on my Arch + further kernel hardening using sysctl and boot parameters, AppArmor as my MAC system and bubblewrap for sandboxing programs. I also spoof my MAC address and have restricted root access, I've protected my GRUB with a password (and use encrypted boot), enabled microcode updates, and disabled NTP and IPv6.
Also on Arch, I use iptables as a firewall, denying all incoming traffic. Since it's my gaming PC but I don't want to game on the host OS directly, I use a KVM/QEMU Windows VM for gaming (search for the "How I Built The "Poor-Shamed" Computer" video to see what I'm talking about). I also use full disk encryption.
Software/Providers:
E-Mails: I use ProtonMail (Plus account paid with bitcoin) and Tutanota (free account, as they don't accept crypto payments yet; come on Tutanota, I've been waiting for it for 2 years already). Since I have a Plus account on ProtonMail, I can use ProtonMail Bridge with Claws Mail (desktop) and K-9 Mail (mobile); for Tutanota I use both the desktop and mobile apps.
Some other e-mail habits of mine: I use e-mail aliases (a ProtonMail Plus account provides you with 5) and each alias is used for different tasks (one for shopping, one for banking, one for accounts etc), and none of my e-mails have my real name on them or anything that could be used to identify me. I also highly avoid using stuff that requires e-mail/e-mail verification (e-mail is such a pain in the ass tbh). I also make use of Spamgourmet for stuff like temporary e-mail (best service I found for this in my research, dunno if it's really the best tho; I've heard AnonAddy does kinda the same thing, but recommendations are welcome)
Browsers/Search Engine: As mentioned, I use Vanadium (Graphene's stock browser) on mobile, as it is the browser recommended by Graphene and the one with the best security on Android. For desktop I use a hardened Firefox (pretty aware of Firefox's security not being that good, but it's the best desktop browser for me, as Ungoogled Chromium is still not there in A LOT of things + it has the inherent problems of Chrome, like not being able to disable WebRTC unless you use an extension etc) with ghacks-user.js and uBlock Origin (hard mode), uMatrix (globally blocking first-party scripts), HTTPS Everywhere (EASE mode), Decentraleyes (with the recommended rules set for both uBlock Origin and uMatrix) and Temporary Containers as addons. I also sometimes use Tor Browser (Safest mode) in a Whonix VM on Qubes. DuckDuckGo is my go-to search engine and I use DNS over HTTPS on Firefox (BlahDNS as my provider once again)
browsing habits: I avoid JavaScript as much as I can; if it's really needed, I just allow the scripts temporarily in uBlock Origin/uMatrix and disable them again after I'm done. I also generally go with old.reddit.com instead of reddit.com (as JavaScript is not required to browse the old client), nitter.net for checking Twitter stuff (although I rarely have something piquing my interest on Twitter), invidious.snopyta.org as a YouTube front-end (I do however use YouTube sometimes if a video I wanna see can't be played on Invidious or if I wanna watch a livestream), and html.duckduckgo.com instead of duckduckgo.com. Other than avoiding JavaScript, most of my browsing habits are just common sense at this point I'd say. I also use PrivateBin (snopyta's instance) instead of Pastebin, and I have multiple Firefox profiles for different tasks (personal usage, shopping, banking etc)
VPN: I use Mullvad (guess you can mention it here since it's PTIO's recommended provider), paid with bitcoin; honestly the best service available tbh. I use Mullvad's multihop implementation on WireGuard, which I set up manually as I had the time and patience to learn how.
password manager: KeePassXC on desktop and KeePassDX on my smartphone; my desktop password database is stored on a USB flash drive I encrypted with VeraCrypt.
some other software on desktop: LibreOffice (Microsoft Office substitute), GIMP (Photoshop substitute), Vim (used for multiple purposes, mainly as a coding IDE and text editor), VLC (media player), Bisq (bitcoin exchange), Wasabi (bitcoin wallet), OBS (screen recording), Syncthing (file sync), qBittorrent (torrent client) and Element (federated real-time communication software). I sadly couldn't find a good open-source substitute for Sony Vegas (tested many, but none was on the same level as Vegas imo, KDENLive is okay tho), so I just use it in a VM if I need it (a Windows VM solely for video editing, not the same one I use for gaming)
Other:
router: I have an Asus RT-AC68U running OpenWRT as its firmware. I also set up a VPN on it.
cryptocurrency hardware wallet: I store all of my cryptocurrency (Bitcoin and Monero) on a Ledger Nano S; about 97% of my money is in crypto, so a hardware wallet is a must for me.
I have lots of USB flash drives that I use for live ISOs and encrypted backups. I also have a USB data blocker from PortaPow that I generally use if I need to charge my phone in public or in a hotel while on a trip (a rare occasion tbh).
I have a Logitech C920e as a webcam and a Blue Yeti microphone, which I never leave plugged in; I only plug them in when necessary and unplug them as soon as I'm done.
I also have a Nintendo Switch Lite as a gaming console that I mostly use offline; I only connect it to the internet if needed for a software update and then turn its Wi-Fi off again.
Other Habits/Things I've done:
payments: I simply AVOID using credit cards. I try to always pay in cash physically (I live in a third-world country, so thank god most people here still depend on cash), and online I try my best to pay either with cryptocurrency or with gift cards/cash by mail if crypto isn't available. I usually buy crypto on Bisq, as I just don't trust any KYC exchange (and neither should you) and there aren't many people in my area to do face-to-face bitcoin trades with (and I'm skeptical of face-to-face tbh). I use Wasabi Wallet (desktop) to coinjoin bitcoin before buying anything, as this allows a bit more privacy, and I also coinjoin on Wasabi before sending my bitcoins to my hardware wallet. I also don't have a high consumerism drive, so I'm not constantly wanting to buy everything that I see (which helps a lot with this)
social media/accounts: as noted, aside from Signal and Element (which I don't even use that often), I just don't REALLY use any social media (I tried Mastodon for a while, but it honestly felt kinda deserted there, and most of its userbase from what I've seen were people I'd just... rather not hang out with tbh). Although not strictly necessary, this is something I really advise people to do, as social media is literally poison for your mind.
I also don't own any streaming service like Netflix/Amazon Prime/Spotify etc, I basically pirate series/movies/songs and that's it.
I've also deleted ALL my old social media accounts (like Twitter etc) and old e-mails. ALL of my important and main accounts have 2FA enabled and are protected by a strong password (I use KeePass to generate a 35-character password with numbers, capital letters, special symbols etc; each account uses a unique password). I also NEVER use my real name on any account and NEVER post any pictures of myself (I rarely take pictures of anything)
iot/smart devices: aside from my smartphone, I don't have any IoT/smart devices, as I honestly see no need for them (and most of them are WAY too expensive in third-world countries)
files: I constantly back up all of my files (every two weeks) on encrypted flash drives. I also use BleachBit for temporary data cleaning and data/file shredding, and I use Syncthing as a substitute for stuff like Google Drive.
Future plans:
learn to self-host and self-host an e-mail/NextCloud (and maybe even a VPN)
find something like Burner/Hushed but FOSS (if you know any, please let me know)
So, how is it? Anything I should be doing that I'm probably not?
submitted by StunningDistrust to privacytoolsIO [link] [comments]

I've failed... My dreams of confirming a low fee TXN won't come true this month.

For a bit of fun, I created a low-fee TXN (493 s/kB) to see if a pool would confirm below 1000 s/kB. Seems the answer is usually "very very unlikely". My economic game theory went something like this...
Given the option to mine no TXNs or to mine TXNs below 1000 s/kB, miners would always choose anything over nothing
The cost of including a TXN is practically nothing in comparison to the cost of hashing a block. The TXNs are just committed to via the Merkle root in the block header, so not really a heavy expense. Perhaps I'm overthinking it. Maybe most miners just use the default settings and don't waste a moment's thought on anything beyond them. Maybe there is an economic factor I'm missing.
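To put a number on how cheap including a TXN is, here's a toy sketch (my own illustration, not consensus code) of the Merkle-root recomputation a miner redoes when it adds another TXN. Compared to grinding the header hash quintillions of times, it's nothing:

```python
# Toy sketch: Bitcoin commits to TXNs via a Merkle root of double-SHA256 hashes,
# so adding one more TXN just means recomputing this root (microseconds of work).
import hashlib

def dsha256(b: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def merkle_root(txids: list) -> bytes:
    layer = list(txids)
    while len(layer) > 1:
        if len(layer) % 2:                 # Bitcoin duplicates the last hash on odd layers
            layer.append(layer[-1])
        layer = [dsha256(layer[i] + layer[i + 1]) for i in range(0, len(layer), 2)]
    return layer[0]

fake_txids = [dsha256(bytes([i])) for i in range(7)]   # stand-in txids
print(merkle_root(fake_txids).hex())
```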
Oh well. At least PR #13990 is staying active... Guess I'm just impatient.
submitted by brianddk to Bitcoin [link] [comments]

Technical: The Path to Taproot Activation

Taproot! Everybody wants to have it, somebody wants to make it, nobody knows how to get it!
(If you are asking why everybody wants it, see: Technical: Taproot: Why Activate?)
(Pedants: I mostly elide over lock-in times)
Briefly, Taproot is that neat new thing that gets us:
So yes, let's activate taproot!

The SegWit Wars

The biggest problem with activating Taproot is PTSD from the previous softfork, SegWit. Pieter Wuille, one of the authors of the current Taproot proposal, has consistently held the position that he will not discuss activation, and will accept whatever activation process is imposed on Taproot. Other developers have expressed similar opinions.
So what happened with SegWit activation that was so traumatic? SegWit used the BIP9 activation method. Let's dive into BIP9!

BIP9 Miner-Activated Soft Fork

Basically, BIP9 has a bunch of parameters; the two that matter most are the signalling bit (which nVersion bit miners set to say they are ready) and the timeout (the time by which activation must succeed, or the deployment fails).
Now there are other parameters (name, starttime) but they are not anywhere near as important as the above two.
A number that is not a parameter, is 95%. Basically, activation of a BIP9 softfork is considered as actually succeeding if at least 95% of blocks in the last 2 weeks had the specified bit in the nVersion set. If less than 95% had this bit set before the timeout, then the upgrade fails and never goes into the network. This is not a parameter: it is a constant defined by BIP9, and developers using BIP9 activation cannot change this.
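As a rough illustration (my own sketch, not Bitcoin Core's actual state machine), the check boils down to counting signalling blocks in one 2016-block retarget window; the 95% threshold works out to 1916 blocks on mainnet:

```python
# Toy illustration of the BIP9 lock-in rule (not Bitcoin Core's implementation).
WINDOW = 2016        # one difficulty retarget period, roughly two weeks
THRESHOLD = 1916     # 95% of the window

def signals(n_version: int, bit: int) -> bool:
    # BIP9 signalling: the top 3 bits of nVersion must be 001, plus the deployment's bit set.
    return (n_version & 0xE0000000) == 0x20000000 and bool((n_version >> bit) & 1)

def window_locks_in(window_versions: list, bit: int) -> bool:
    assert len(window_versions) == WINDOW
    return sum(signals(v, bit) for v in window_versions) >= THRESHOLD

# 1900 signalling blocks out of 2016 is just under 95%, so no lock-in:
versions = [0x20000002] * 1900 + [0x20000000] * 116
print(window_locks_in(versions, bit=1))   # False
```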
So, first some simple questions and their answers:

The Great Battles of the SegWit Wars

SegWit not only fixed transaction malleability, it also created a practical softforkable blocksize increase that also rebalanced weights so that the cost of spending a UTXO is about the same as the cost of creating UTXOs (and spending UTXOs is "better" since it limits the size of the UTXO set that every fullnode has to maintain).
So SegWit was written, the activation was decided to be BIP9, and then.... miner signalling stalled at below 75%.
Thus were the Great SegWit Wars started.

BIP9 Feature Hostage

If you are a miner with at least 5% global hashpower, you can hold a BIP9-activated softfork hostage.
You might even secretly want the softfork to actually push through. But you might want to extract concession from the users and the developers. Like removing the halvening. Or raising or even removing the block size caps (which helps larger miners more than smaller miners, making it easier to become a bigger fish that eats all the smaller fishes). Or whatever.
With BIP9, you can hold the softfork hostage. You just hold out and refuse to signal. You tell everyone you will signal, if and only if certain concessions are given to you.
This ability by miners to hold a feature hostage was enabled because of the miner-exit allowed by the timeout on BIP9. Prior to that, miners were considered little more than expendable security guards, paid for the risk they take to secure the network, but not special in the grand scheme of Bitcoin.

Covert ASICBoost

ASICBoost was a novel way of optimizing SHA256 mining, by taking advantage of the structure of the 80-byte header that is hashed in order to perform proof-of-work. The details of ASICBoost are out-of-scope here, but you can read about it elsewhere.
Here is a short summary of the two types of ASICBoost, relevant to the activation discussion.
Now, "overt" means "obvious", while "covert" means hidden. Overt ASICBoost is obvious because nVersion bits that are not currently in use for BIP9 activations are usually 0 by default, so setting those bits to 1 makes it obvious that you are doing something weird (namely, Overt ASICBoost). Covert ASICBoost is non-obvious because the order of transactions in a block are up to the miner anyway, so the miner rearranging the transactions in order to get lower power consumption is not going to be detected.
Unfortunately, while Overt ASICBoost was compatible with SegWit, Covert ASICBoost was not. This is because, pre-SegWit, only the block header Merkle tree committed to the transaction ordering. However, with SegWit, another Merkle tree exists, which commits to transaction ordering as well. Covert ASICBoost would require more computation to manipulate two Merkle trees, obviating the power benefits of Covert ASICBoost anyway.
Now, miners want to use ASICBoost (indeed, about 60-70% of current miners probably use Overt ASICBoost nowadays; if you have a Bitcoin fullnode running, you will see logs with lots of "60 of last 100 blocks had unexpected versions", which is exactly what you would see with the nVersion manipulation that Overt ASICBoost does). But remember: ASICBoost was, at the time, a novel improvement. Not all miners had ASICBoost hardware. Those who did, did not want it known that they had ASICBoost hardware, and wanted to do Covert ASICBoost!
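If you are wondering what "unexpected versions" means mechanically, here is a rough sketch of the idea (mine, and simplified; Core's real check compares blocks against the version it would itself produce). BIP9 expects the top three nVersion bits to be 001 and only the bits of known deployments to be set, so a block whose spare bits are being ground for Overt ASICBoost stands out:

```python
# Rough sketch of why Overt ASICBoost blocks look "unexpected" (not Core's actual logic).
TOP_BITS_MASK = 0xE0000000
TOP_BITS_BIP9 = 0x20000000

def unexpected_version(n_version: int, known_deployment_bits: set) -> bool:
    if (n_version & TOP_BITS_MASK) != TOP_BITS_BIP9:
        return True                                    # not even BIP9-shaped
    signalled = {b for b in range(29) if (n_version >> b) & 1}
    return bool(signalled - known_deployment_bits)     # bits no known softfork uses

# With no deployment currently signalling, an ASICBoost-ground nVersion sticks out:
print(unexpected_version(0x20000000, set()))    # False: plain BIP9 version
print(unexpected_version(0x2000E000, set()))    # True: spare bits being ground
```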
But Covert ASICBoost is incompatible with SegWit, because SegWit actually has two Merkle trees of transaction data, and Covert ASICBoost works by fudging around with transaction ordering in a block, and recomputing two Merkle Trees is more expensive than recomputing just one (and loses the ASICBoost advantage).
Of course, those miners that wanted Covert ASICBoost did not want to openly admit that they had ASICBoost hardware; they wanted to keep their advantage secret because miners are strongly competitive in a very tight market. And doing ASICBoost covertly was just the ticket, but it would not work post-SegWit.
Fortunately, due to the BIP9 activation process, they could hold SegWit hostage while covertly taking advantage of Covert ASICBoost!

UASF: BIP148 and BIP8

When the incompatibility between Covert ASICBoost and SegWit was realized, still, activation of SegWit stalled, and miners were still not openly claiming that ASICBoost was related to non-activation of SegWit.
Eventually, a new proposal was created: BIP148. With this rule, 3 months before the end of the SegWit timeout, nodes would reject blocks that did not signal SegWit. Thus, 3 months before SegWit timeout, BIP148 would force activation of SegWit.
This proposal was not accepted by Bitcoin Core, due to the shortening of the timeout (it effectively times out 3 months before the initial SegWit timeout). Instead, a fork of Bitcoin Core was created which added the patch to comply with BIP148. This was claimed as a User Activated Soft Fork, UASF, since users could freely download the alternate fork rather than sticking with the developers of Bitcoin Core.
Now, BIP148 effectively is just a BIP9 activation, except at its (earlier) timeout, the new rules would be activated anyway (instead of the BIP9-mandated behavior that the upgrade is cancelled at the end of the timeout).
BIP148 was actually inspired by the BIP8 proposal (the link here is a historical version; BIP8 has been updated recently, precisely in preparation for Taproot activation). BIP8 is basically BIP9, but at the end of timeout, the softfork is activated anyway rather than cancelled.
This removed the ability of miners to hold the softfork hostage. At best, they can delay the activation, but not stop it entirely by holding out as in BIP9.
Of course, this implies a risk that not all miners have upgraded before activation, leading to possible losses for SPV users, as well as putting pressure back on miners to signal activation, possibly without the miners actually upgrading their software to properly enforce the new softfork rules.

BIP91, SegWit2X, and The Aftermath

BIP148 inspired countermeasures, possibly from the Covert ASICBoost miners, possibly from concerned users who wanted to offer concessions to miners. To this day, the common name for BIP148 - UASF - remains an emotionally-charged rallying cry for parts of the Bitcoin community.
One of these was SegWit2X. This was brokered in a deal between some Bitcoin personalities at a conference in New York, and thus part of the so-called "New York Agreement" or NYA, another emotionally-charged acronym.
The text of the NYA was basically:
  1. Set up a new activation threshold at 80% signalled at bit 4 (vs bit 1 for SegWit).
    • When this 80% signalling was reached, miners would require that bit 1 for SegWit be signalled to achieve the 95% activation needed for SegWit.
  2. If the bit 4 signalling reached 80%, increase the block weight limit from the SegWit 4000000 to the SegWit2X 8000000, 6 months after bit 1 activation.
The first item above was coded in BIP91.
Unfortunately, if you read BIP91 independently of the NYA, you might come to the conclusion that BIP91 was only about lowering the threshold to 80%. In particular, BIP91 never mentions anything about the second point above; it never mentions that the bit 4 80% threshold would also signal for a later hardfork increase in the weight limit.
Because of this, even though there are claims that NYA (SegWit2X) reached 80% dominance, a close reading of BIP91 shows that the 80% dominance was only for SegWit activation, without necessarily a later 2x capacity hardfork (SegWit2X).
This ambiguity of bit 4 (NYA says it includes a 2x capacity hardfork, BIP91 says it does not) has continued to be a thorn in blocksize debates later. Economically speaking, Bitcoin futures between SegWit and SegWit2X showed strong economic dominance in favor of SegWit (SegWit2X futures were traded at a fraction in value of SegWit futures: I personally made a tidy but small amount of money betting against SegWit2X in the futures market), so suggesting that NYA achieved 80% dominance even in mining is laughable, but the NYA text that ties bit 4 to SegWit2X still exists.
Historically, BIP91 triggered, which caused SegWit to activate before the BIP148 shorter timeout. BIP148 proponents hold to this day that it was the BIP148 shorter timeout and no-compromises-activate-on-August-1 stance that made miners flock to BIP91 as a face-saving tactic, which actually removed the second clause of the NYA. NYA supporters keep pointing to the bit 4 text in the NYA and the historical activation of BIP91 as a failed promise by Bitcoin developers.

Taproot Activation Proposals

There are two primary proposals I can see for Taproot activation:
  1. BIP8.
  2. Modern Softfork Activation.
We have discussed BIP8: roughly, it has a bit and a timeout; if 95% of miners signal the bit it activates, and at the end of the timeout it activates anyway. (EDIT: BIP8 has had recent updates: at the end of the timeout it can now either activate or fail. For the most part, in the text below "BIP8" means BIP8-and-activate-at-timeout, and "BIP9" means BIP8-and-fail-at-timeout.)
So let's take a look at Modern Softfork Activation!

Modern Softfork Activation

This is a more complex activation method, composed of BIP9 and BIP8 as subcomponents.
  1. First have a 12-month BIP9 (fail at timeout).
  2. If the above fails to activate, have a 6-month discussion period during which users and developers and miners discuss whether to continue to step 3.
  3. Have a 24-month BIP8 (activate at timeout).
The total above is 42 months, if you are counting: 3.5 years worst-case activation.
The logic here is that if there are no problems, BIP9 will work just fine anyway. And if there are problems, the 6-month period should weed it out. Finally, miners cannot hold the feature hostage since the 24-month BIP8 period will exist anyway.

PSA: Being Resilient to Upgrades

Software is very brittle.
Anyone who has been using software for a long time has experienced something like this:
  1. You hear a new version of your favorite software has a nice new feature.
  2. Excited, you install the new version.
  3. You find that the new version has subtle incompatibilities with your current workflow.
  4. You are sad and downgrade to the older version.
  5. You find out that the new version has changed your files in incompatible ways that the old version cannot work with anymore.
  6. You tearfully reinstall the newer version and figure out how to recover your lost productivity now that you have to adapt to a new workflow.
If you are a technically-competent user, you might codify your workflow into a bunch of programs. And then you upgrade one of the external pieces of software you are using, and find that it has a subtle incompatibility with your current workflow, which is based on a bunch of simple programs you wrote yourself. And if those simple programs are used as the basis of some important production system, you have just screwed up, because you upgraded software on an important production system.
And well, one of the issues with new softfork activation is that if not enough people (users and miners) upgrade to the newest Bitcoin software, the security of the new softfork rules is at risk.
Upgrading software of any kind is always a risk, and the more software you build on top of the software-being-upgraded, the greater you risk your tower of software collapsing while you change its foundations.
So if you have some complex Bitcoin-manipulating system with Bitcoin somewhere at the foundations, consider running two Bitcoin nodes:
  1. One is a "stable-version" Bitcoin node. Once it has synced, set it up to connect=x.x.x.x to the second node below (so that your ISP bandwidth is only spent on the second node). Use this node to run all your software: it's a stable version that you don't change for long periods of time. Enable txiindex, disable pruning, whatever your software needs.
  2. The other is an "always-up-to-date" Bitcoin Node. Keep its stoarge down with pruning (initially sync it off the "stable-version" node). You can't use blocksonly if your "stable-version" node needs to send transactions, but otherwise this "always-up-to-date" Bitcoin node can be kept as a low-resource node, so you can run both nodes in the same machine.
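As a concrete (and hedged) illustration of that split, the two bitcoin.conf files could look roughly like the sketch below. The port number and prune size are just assumptions you would adjust, and each node needs its own datadir:

```
# bitcoin.conf for the "stable-version" node (the one your software talks to)
txindex=1                   # or whatever your software needs
prune=0
listen=0
connect=127.0.0.1:8444      # assumed P2P port of the always-up-to-date node below

# bitcoin.conf for the "always-up-to-date" node (separate datadir)
port=8444                   # assumed non-default port so both nodes can coexist
prune=550                   # MiB; keeps its storage small
# blocksonly=1              # only if the stable node never needs to broadcast transactions
```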
When a new Bitcoin version comes out, you just upgrade the "always-up-to-date" Bitcoin node. This protects you if a future softfork activates: you will only receive valid Bitcoin blocks and transactions. Since this node has nothing running on top of it and is just a special peer of the "stable-version" node, software incompatibilities with your own system do not exist.
Your "stable-version" Bitcoin node remains the same version until you are ready to actually upgrade this node and are prepared to rewrite most of the software you have running on top of it due to version compatibility problems.
When upgrading the "always-up-to-date", you can bring it down safely and then start it later. Your "stable-version" wil keep running, disconnected from the network, but otherwise still available for whatever queries. You do need some system to stop the "always-up-to-date" node if for any reason the "stable-version" goes down (otherwisee if the "always-up-to-date" advances its pruning window past what your "stable-version" has, the "stable-version" cannot sync afterwards), but if you are technically competent enough that you need to do this, you are technically competent enough to write such a trivial monitor program (EDIT: gmax notes you can adjust the pruning window by RPC commands to help with this as well).
This recommendation is from gmaxwell on IRC, by the way.
submitted by almkglor to Bitcoin [link] [comments]

Mainnet project: an important change. If you are a donor, please read.

Hi everybody.
It has been one week since the mainnet project got the funding and I have an important update to make.
A little bit about the progress: I've found a wonderful developer, who is helping with the library, so it is starting to take some shape. I'm ironing out our REST API, got some useful feedback, continuing to do so. About 0.17% of the total funding spent so far.
The important update though is that I have decided to take the development and spending private, instead of public. Before I explain what that means and why, I understand that it might upset some donors. So, if you have pledged any amount and disagree with my change for any reason - please contact me (DM, or [email protected]) and I'll refund your pledge completely, no questions asked.
(Please sign any message using the address that you used, to prove that you sent the funds; see the list of donors here to find your pledge and the link to the funding donation to find which address you sent from).
If more than 50% of pledges ask for money back, I'll just return everything to everybody in full and we'll consider the project cancelled. At that point anyone willing to take on the project (via a new Flipstarter or something), I'll donate the domain to them. Everything that is done so far is MIT licensed, so anyone is free to take it at any moment.
Let the market decide!
I've got to tell you that I'm a bit disappointed with our progress so far. I expected a lot of people to be willing to earn some money, but I've got only 4 relevant developers; 3 of them passed a very simple test, and only one is actually doing anything.
I did not expect this when I promised to work publicly and with BCH developers.
Another problem is that I have a certain vision that I described in the project description. In addition to that vision there is also a lot of experience talking to read.cash users. A lot of them are in countries with very bad Internet (2G, a few kilobytes per second), using very old Android phones (10+ years old, the size of an iPhone 4 and half its speed)... And I also really hope that someday we will have 100MB blocks, 1GB, 1TB blocks. But now I'm tied up in arguments with BCH developers who argue that many current solutions are good enough already and we don't need to change them - just build on top of a few convoluted and complex protocols, just download a block when needed (again, Africa, 2G, 100MB blocks), just download 640,000 block headers, listen to the whole mempool (with a 1TB block we'll have a 1TB mempool) - it's fine, blocks are tiny... Just send a few queries (now)... Just download the mempool fully.
(To those of you that know what this is about, please don't name names, I'm not here to play the blame game, everybody is entitled to their own opinions. It's fine.)
If your wallet becomes too big - create a new one. It's fine.
Sidenote: my read.cash wallet that gets the fees takes a few hours to open now, and it's barely 9 months old! I find current solutions unacceptable, I want my wallet to open up immediately and handle 100MB blocks as well as 60KB blocks.
I don't want to develop for tiny blocks or tiny wallets that need to be changed every few months.. I want huge blocks! I don't want mainnet to be as brittle as to break at the first sight of success.
A few of these discussions got me really tired and I have no leverage on these guys. They have money now, they have their vision, I have mine, described on the site, they don't want to do it my way. I didn't collect the funds to do it their way.
Yet I have made a commitment to work with them.
This is very tiresome. I feel like I've got myself into a trap - I have to work with these people, they don't want to work on my stuff.
This is just stupid.
One more thing is that now that I have Slack - I'm caught in endless private discussions of people trying to sell me their vision of how stuff should be done or questions about me or read.cash... I didn't sign up for that, I barely have any time to do the work, I don't have time for this, sorry.
Change #1: Private development
Having said that, I'm moving the project to private development.
Frankly, all I care about is getting this project done. I added an additional burden on myself by committing to do the development publicly. And it's tiresome.
The plan would be to hire some outside developers, using regular contracts, so that they don't have THEIR ideas on how to do the project and they'll just do what I described.
I think everybody cares about the end result - library working, document being written, etc...
Change #2: Private spending
Hired developers also means salaries. When people (in the real world) know the salaries of other people, it leads to conflicts. I went through this experiment (public salaries) once in my life, I won't go through it again. Even people knowing your budget becomes a problem, since they start to bargain with you. (Again, we're talking about outside developers; they are not interested in BCH's success, they are interested in getting as much money as possible.)
By private spending I mean that I'll post periodically how much is done and approximately how much of the funds is left, but no details on who got what for what. Right now there's 99.83% of the funds left.
Some of you might see it as a money grab or something else - I can't blame you, but I'd rather see this project cancelled by market forces than drown in endless fights about why we should do exactly nothing or their idea, hope for small blocks and use what we have no matter how convoluted or hard it is, or why somebody's hourly rate should be bigger than that guy's.
Will this lead to everyone cancelling their donations? It sure could! It's voluntary funding after all, I can't force anyone to love what I do or how I do it.
If you donated and want a refund to your original address - just ping me.
When this post is 48 hours old, if more than 50% pledges remain, the project will move on as described above. If 50%+ cancels - everybody gets refunds to their original addresses.
submitted by readcash to btc [link] [comments]

[ Bitcoin ] Technical: Taproot: Why Activate?

Topic originally posted in Bitcoin by almkglor [link]
This is a follow-up on https://old.reddit.com/Bitcoin/comments/hqzp14/technical_the_path_to_taproot_activation/
Taproot! Everybody wants it!! But... you might ask yourself: sure, everybody else wants it, but why would I, sovereign Bitcoin HODLer, want it? Surely I can be better than everybody else because I swapped XXX fiat for Bitcoin unlike all those nocoiners?
And it is important for you to know the reasons why you, o sovereign Bitcoiner, would want Taproot activated. After all, your nodes (or the nodes your wallets use, which, if you are an SPV user, you hopefully can pester your wallet vendor/implementor about) need to be upgraded in order for Taproot activation to actually succeed instead of becoming a hot sticky mess.
First, let's consider some principles of Bitcoin.
I'm sure most of us here would agree that the above are very important principles of Bitcoin and that these are principles we would not be willing to remove. If anything, we would want those principles strengthened (especially the last one, financial privacy, which current Bitcoin is only sporadically strong with: you can get privacy, it just requires effort to do so).
So, how does Taproot affect those principles?

Taproot and Your Coins

Most HODLers probably HODL their coins in singlesig addresses. Sadly, switching to Taproot would do very little for you (it gives a mild discount at spend time, at the cost of a mild increase in fee at receive time (paid by whoever sends to you, so if it's a self-send from a P2PKH or bech32 address, you pay for this); mostly a wash).
(technical details: a Taproot output is 1 version byte + 32 byte public key, while a P2WPKH (bech32 singlesig) output is 1 version byte + 20 byte public key hash, so the Taproot output spends 12 bytes more; spending from a P2WPKH requires revealing a 32-byte public key later, which is not needed with Taproot, and Taproot signatures are about 9 bytes smaller than P2WPKH signatures, but the 32 bytes plus 9 bytes is divided by 4 because of the witness discount, so it saves about 11 bytes; mostly a wash, it increases blockweight by about 1 virtual byte, 4 weight for each Taproot-output-input, compared to P2WPKH-output-input).
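If you want to redo that arithmetic yourself, here is a back-of-the-envelope sketch following the numbers in the parenthetical above (rough accounting, not an exact serialization count):

```python
# Rough accounting of the parenthetical above (not exact serialization sizes).
WITNESS_DISCOUNT = 4

# Receive side: a Taproot scriptPubKey is 1 + 32 bytes vs 1 + 20 bytes for P2WPKH.
extra_output_bytes = (1 + 32) - (1 + 20)       # 12 extra non-witness bytes
extra_output_weight = extra_output_bytes * 4   # 48 weight units paid at receive time

# Spend side: P2WPKH must reveal a 32-byte pubkey, and its signatures are ~9 bytes
# larger than Schnorr ones; both live in the witness, so they get the discount.
witness_savings_weight = 32 + 9                # 41 weight units saved at spend time
print(extra_output_weight, witness_savings_weight,
      round(witness_savings_weight / WITNESS_DISCOUNT, 2))
# 48 weight more when receiving, ~41 weight less when spending: mostly a wash.
```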
However, as your HODLings grow in value, you might start wondering if multisignature k-of-n setups might be better for the security of your savings. And it is in multisignature that Taproot starts to give benefits!
Taproot switches to using Schnorr signing scheme. Schnorr makes key aggregation -- constructing a single public key from multiple public keys -- almost as trivial as adding numbers together. "Almost" because it involves some fairly advanced math instead of simple boring number adding, but hey when was the last time you added up your grocery list prices by hand huh?
With current P2SH and P2WSH multisignature schemes, if you have a 2-of-3 setup, then to spend, you need to provide two different signatures from two different public keys. With Taproot, you can create, using special moon math, a single public key that represents your 2-of-3 setup. Then you just put two of your devices together, have them communicate to each other (this can be done airgapped, in theory, by sending QR codes: the software to do this is not even being built yet, but that's because Taproot hasn't activated yet!), and they will make a single signature to authorize any spend from your 2-of-3 address. That's 73 witness bytes -- 18.25 virtual bytes -- of signatures you save!
And if you decide that your current setup with 1-of-1 P2PKH / P2WPKH addresses is just fine as-is: well, that's the whole point of a softfork: backwards-compatibility; you can receive from Taproot users just fine, and once your wallet is updated for Taproot-sending support, you can send to Taproot users just fine as well!
(P2WPKH and P2WSH -- SegWit v0 -- addresses start with bc1q; Taproot -- SegWit v1 --- addresses start with bc1p, in case you wanted to know the difference; in bech32 q is 0, p is 1)
Now how about HODLers who keep all, or some, of their coins on custodial services? Well, any custodial service worth its salt would be doing at least 2-of-3, or probably something even bigger, like 11-of-15. So your custodial service, if it switched to using Taproot internally, could save a lot more (imagine an 11-of-15 getting reduced from 11 signatures to just 1!), which --- we can only hope! --- should translate to lower fees and better customer service from your custodial service!
So I think we can say, very accurately, that the Bitcoin principle --- that YOU are in control of your money --- can only be helped by Taproot (if you are doing multisignature), and, because P2PKH and P2WPKH remain validly-usable addresses in a Taproot future, will not be harmed by Taproot. Its benefit to this principle might be small (it mostly only benefits multisignature users) but since it has no drawbacks with this (i.e. singlesig users can continue to use P2WPKH and P2PKH still) this is still a nice, tidy win!
(even singlesig users get a minor benefit, in that multisig users will now reduce their blockchain space footprint, so that fees can be kept low for everybody; so for example even if you have your single set of private keys engraved on titanium plates sealed in an airtight box stored in a safe buried in a desert protected by angry nomads riding giant sandworms because you're the frickin' Kwisatz Haderach, you still gain some benefit from Taproot)
And here's the important part: if P2PKH/P2WPKH is working perfectly fine with you and you decide to never use Taproot yourself, Taproot will not affect you detrimentally. First do no harm!

Taproot and Your Contracts

No one is an island, no one lives alone. Give and you shall receive. You know: by trading with other people, you can gain expertise in some obscure little necessity of the world (and greatly increase your productivity in that little field), and then trade the products of your expertise for necessities other people have created, all of you thereby gaining gains from trade.
So, contracts, which are basically enforceable agreements that facilitate trading with people who you do not personally know and therefore might not trust.
Let's start with a simple example. You want to buy some gewgaws from somebody. But you don't know them personally. The seller wants the money, you want their gewgaws, but because of the lack of trust (you don't know them!! what if they're scammers??) neither of you can benefit from gains from trade.
However, suppose both of you know of some entity that both of you trust. That entity can act as a trusted escrow. The entity provides you security: this enables the trade, allowing both of you to get gains from trade.
In Bitcoin-land, this can be implemented as a 2-of-3 multisignature. The three signatories in the multisignature would be you, the gewgaw seller, and the escrow. You put the payment for the gewgaws into this 2-of-3 multisignature address.
Now, suppose it turns out neither of you are scammers (whaaaat!). You receive the gewgaws just fine and you're willing to pay up for them. Then you and the gewgaw seller just sign a transaction --- you and the gewgaw seller are 2, sufficient to trigger the 2-of-3 --- that spends from the 2-of-3 address to a singlesig the gewgaw seller wants (or whatever address the gewgaw seller wants).
But suppose some problem arises. The seller gave you gawgews instead of gewgaws. Or you decided to keep the gewgaws but not sign the transaction to release the funds to the seller. In either case, the escrow is notified, and it can sign with you to refund the funds back to you (if the seller was a scammer), or it can sign with the seller to forward the funds to the seller (if you were a scammer).
Taproot helps with this: as mentioned above, it allows multisignature setups to produce only one signature, reducing blockchain space usage, and thus making contracts --- which require multiple people, by definition; you don't make contracts with yourself --- cheaper (which we hope enables more of these setups to happen, for more gains from trade for everyone; also, moon and lambos).
(technology-wise, it's easier to make an n-of-n than a k-of-n, making a k-of-n would require a complex setup involving a long ritual with many communication rounds between the n participants, but an n-of-n can be done trivially with some moon math. You can, however, make what is effectively a 2-of-3 by using a three-branch SCRIPT: either 2-of-2 of you and seller, OR 2-of-2 of you and escrow, OR 2-of-2 of escrow and seller. Fortunately, Taproot adds a facility to embed a SCRIPT inside a public key, so you can have a 2-of-2 Taprooted address (between you and seller) with a SCRIPT branch that can instead be spent with 2-of-2 (you + escrow) OR 2-of-2 (seller + escrow), which implements the three-branched SCRIPT above. If neither of you are scammers (hopefully the common case) then you both sign using your keys and never have to contact the escrow, since you are just using the escrow public key without coordinating with them (because n-of-n is trivial but k-of-n requires setup with communication rounds), so in the "best case" where both of you are honest traders, you also get a privacy boost, in that the escrow never learns you have been trading on gewgaws, I mean ewww, gawgews are much better than gewgaws and therefore I now judge you for being a gewgaw enthusiast, you filthy gewgawer).
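To make the shape of that escrow contract easier to see, here is a purely illustrative sketch. The helper names are hypothetical stand-ins, not any real wallet library:

```python
# Illustration only: a 2-of-3 escrow expressed as a Taproot key path plus two script leaves.
from dataclasses import dataclass

@dataclass
class TaprootEscrow:
    key_path: tuple        # aggregate of (buyer, seller): the happy path, one signature onchain
    script_leaves: list    # fallback branches, only ever revealed if actually used

buyer, seller, escrow = "P_buyer", "P_seller", "P_escrow"   # stand-ins for public keys

contract = TaprootEscrow(
    key_path=("aggregate", buyer, seller),        # cooperative spend, escrow never contacted
    script_leaves=[
        ("2of2", buyer, escrow),                  # buyer + escrow: refund the buyer
        ("2of2", seller, escrow),                 # seller + escrow: pay the seller
    ],
)
print(contract)
```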

Taproot and Your Contracts, Part 2: Cryptographic Boogaloo

Now suppose you want to buy some data instead of things. For example, maybe you have some closed-source software in trial mode installed, and want to pay the developer for the full version. You want to pay for an activation code.
This can be done, today, by using an HTLC. The developer tells you the hash of the activation code. You pay to an HTLC, paying out to the developer if it reveals the preimage (the activation code), or refunding the money back to you after a pre-agreed timeout. If the developer claims the funds, it has to reveal the preimage, which is the activation code, and you can now activate your software. If the developer does not claim the funds by the timeout, you get refunded.
And you can do that, with HTLCs, today.
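Here is a toy model of the two HTLC branches (an illustration in Python, not actual Bitcoin Script):

```python
# Toy model of an HTLC's two spending conditions (illustration only).
import hashlib

def sha256(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def can_spend(claimer: str, preimage, payment_hash: bytes, now: int, timeout: int) -> bool:
    if claimer == "seller":
        # Seller branch: revealing the preimage (the activation code) claims the funds.
        return preimage is not None and sha256(preimage) == payment_hash
    if claimer == "buyer":
        # Buyer branch: after the timeout, the buyer takes the refund.
        return now >= timeout
    return False

activation_code = b"XYZZY-1234"                      # made-up activation code
h = sha256(activation_code)
print(can_spend("seller", activation_code, h, now=10, timeout=100))   # True: code revealed
print(can_spend("buyer", None, h, now=10, timeout=100))               # False: too early to refund
```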
Of course, HTLCs do have problems:
Fortunately, with Schnorr (which is enabled by Taproot), we can now use the Scriptless Script construction by Andrew Poelstra. Scriptless Script allows a new construction, the PTLC or Pointlocked Timelocked Contract. Instead of hashes and preimages, just replace "hash" with "point" and "preimage" with "scalar".
Or as you might know them: a "point" is really a public key and a "scalar" is really a private key. What a PTLC does is that, given a particular public key, the pointlocked branch can be spent only if the spender reveals the private key of that given public key to you.
Another nice thing about PTLCs is that they are deniable. What appears onchain is just a single 2-of-2 signature between you and the developer/manufacturer. It's like a magic trick. This signature has no special watermarks, it's a perfectly normal signature (the pledge). However, from this signature, plus some data given to you by the developer/manufacturer (known as the adaptor signature), you can derive the private key of a particular public key you both agree on (the turn). Anyone scraping the blockchain will just see signatures that look just like every other signature, and as long as nobody manages to hack you and get a copy of the adaptor signature or the private key, they cannot get the private key behind the public key (point) that the pointlocked branch needs (the prestige).
(Just to be clear, the public key you are getting the private key of is distinct from the public key that the developer/manufacturer will use for its funds. The activation key is different from the developer's onchain Bitcoin key, and it is the activation key whose private key you will be learning, not the developer's/manufacturer's onchain Bitcoin key.)
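Stripped down to the bare arithmetic, the "turn" and "the prestige" look roughly like this (an extremely schematic sketch; a real adaptor-signature protocol involves nonces, challenges and verification steps that are all omitted here):

```python
# Extremely schematic: recovering the secret scalar from the adaptor signature
# plus the final onchain signature. Everything here is a stand-in value.
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141  # secp256k1 group order

activation_secret = 0x1DEA          # the scalar (private key) the buyer is paying to learn
presig = 123456789                  # stand-in for the seller's partial signature
adaptor_sig = presig                                  # handed to the buyer off-chain (the pledge)
onchain_sig = (presig + activation_secret) % N        # revealed when the seller claims the funds

recovered = (onchain_sig - adaptor_sig) % N           # the buyer subtracts and learns the secret
assert recovered == activation_secret
print(hex(recovered))
```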
So:
Taproot lets PTLCs exist onchain because they enable Schnorr, which is a requirement of PTLCs / Scriptless Script.
(technology-wise, take note that Scriptless Script works only for the "pointlocked" branch of the contract; you need normal Script, or a pre-signed nLockTimed transaction, for the "timelocked" branch. Since Taproot can embed a script, you can have the Taproot pubkey be a 2-of-2 to implement the Scriptless Script "pointlocked" branch, then have a hidden script that lets you recover the funds with an OP_CHECKLOCKTIMEVERIFY after the timeout if the seller does not claim the funds.)

Quantum Quibbles!

Now if you were really paying attention, you might have noticed this parenthetical:
(technical details: a Taproot output is 1 version byte + 32 byte public key, while a P2WPKH (bech32 singlesig) output is 1 version byte + 20 byte public key hash...)
So wait, Taproot uses raw 32-byte public keys, and not public key hashes? Isn't that more quantum-vulnerable??
Well, in theory yes. In practice, they probably are not.
It's not that hashes can be broken by quantum computers --- they're still not. Instead, you have to look at how you spend from a P2WPKH/P2PKH pay-to-public-key-hash.
When you spend from a P2PKH / P2WPKH, you have to reveal the public key. Then Bitcoin hashes it and checks if this matches with the public-key-hash, and only then actually validates the signature for that public key.
So an unconfirmed transaction, floating in the mempools of nodes globally, will show, in plain sight for everyone to see, your public key.
(public keys should be public, that's why they're called public keys, LOL)
And if quantum computers are fast enough to be of concern, then they are probably fast enough that, in the several minutes to several hours from broadcast to confirmation, they have already cracked the public key that is openly broadcast with your transaction. The owner of the quantum computer can now replace your unconfirmed transaction with one that pays the funds to itself. Even if you did not opt-in RBF, miners are still incentivized to support RBF on RBF-disabled transactions.
So the extra hash is not as significant a protection against quantum computers as you might think. Instead, the extra hash-and-compare needed is just extra validation effort.
Further, if you have ever, in the past, spent from the address, then there exists already a transaction indelibly stored on the blockchain, openly displaying the public key from which quantum computers can derive the private key. So those are still vulnerable to quantum computers.
For the most part, the cryptographers behind Taproot (and Bitcoin Core) are of the opinion that quantum computers capable of cracking Bitcoin pubkeys are unlikely to appear within a decade or two.
So:
For now, the homomorphic and linear properties of elliptic curve cryptography provide a lot of benefits --- particularly the linearity property is what enables Scriptless Script and simple multisignature (i.e. multisignatures that are just 1 signature onchain). So it might be a good idea to take advantage of them now while we are still fairly safe against quantum computers. It seems likely that quantum-safe signature schemes are nonlinear (thus losing these advantages).

Summary

I Wanna Be The Taprooter!

So, do you want to help activate Taproot? Here's what you, mister sovereign Bitcoin HODLer, can do!

But I Hate Taproot!!

That's fine!

Discussions About Taproot Activation

submitted by anticensor_bot to u/anticensor_bot [link] [comments]

A new whitepaper analysing the performance and scalability of the Streamr pub/sub messaging Network is now available. Take a look at some of the fascinating key results in this introductory blog


Streamr Network: Performance and Scalability Whitepaper


The Corea milestone of the Streamr Network went live in late 2019. Since then a few people in the team have been working on an academic whitepaper to describe its design principles, position it with respect to prior art, and prove certain properties it has. The paper is now ready, and it has been submitted to the IEEE Access journal for peer review. It is also now published on the new Papers section on the project website. In this blog, I’ll introduce the paper and explain its key results. All the figures presented in this post are from the paper.
The reasons for doing this research and writing this paper were simple: many prospective users of the Network, especially more serious ones such as enterprises, ask questions like ‘how does it scale?’, ‘why does it scale?’, ‘what is the latency in the network?’, and ‘how much bandwidth is consumed?’. While some answers could be provided before, the Network in its currently deployed form is still small-scale and can’t really show a track record of scalability for example, so there was clearly a need to produce some in-depth material about the structure of the Network and its performance at large, global scale. The paper answers these questions.
Another reason is that decentralized peer-to-peer networks have experienced a new renaissance due to the rise in blockchain networks. Peer-to-peer pub/sub networks were a hot research topic in the early 2000s, but not many real-world implementations were ever created. Today, most blockchain networks use methods from that era under the hood to disseminate block headers, transactions, and other events important for them to function. Other megatrends like IoT and social media are also creating demand for new kinds of scalable message transport layers.

The latency vs. bandwidth tradeoff

The current Streamr Network uses regular random graphs as stream topologies. ‘Regular’ here means that nodes connect to a fixed number of other nodes that publish or subscribe to the same stream, and ‘random’ means that those nodes are selected randomly.
Random connections can of course mean that absurd routes get formed occasionally, for example a data point might travel from Germany to France via the US. But random graphs have been studied extensively in the academic literature, and their properties are not nearly as bad as the above example sounds — such graphs are actually quite good! Data always takes multiple routes in the network, and only the fastest route counts. The less-than-optimal routes are there for redundancy, and redundancy is good, because it improves security and churn tolerance.
There is an important parameter called node degree, which is the fixed number of nodes to which each node in a topology connects. A higher node degree means more duplication and thus more bandwidth consumption for each node, but it also means that fast routes are more likely to form. It’s a tradeoff; better latency can be traded for worse bandwidth consumption. In the following section, we’ll go deeper into analyzing this relationship.

Network diameter scales logarithmically

One useful metric to estimate the behavior of latency is the network diameter, which is the number of hops on the shortest path between the most distant pair of nodes in the network (i.e. the "longest shortest path"). The below plot shows how the network diameter behaves depending on node degree and number of nodes.

Network diameter
We can see that the network diameter increases logarithmically (very slowly), and a higher node degree ‘flattens the curve’. This is a property of random regular graphs, and this is very good — growing from 10,000 nodes to 100,000 nodes only increases the diameter by a few hops! To analyse the effect of the node degree further, we can plot the maximum network diameter using various node degrees:
Network diameter in network of 100 000 nodes
We can see that there are diminishing returns for increasing the node degree. On the other hand, the penalty (number of duplicates, i.e. bandwidth consumption) increases linearly with node degree:

Number of duplicates received by the non-publisher nodes
In the Streamr Network, each stream forms its own separate overlay network and can even have a custom node degree. This allows the owner of the stream to configure their preferred latency/bandwidth balance (imagine such a slider control in the Streamr Core UI). However, finding a good default value is important. From this analysis, we can conclude that:
  • The logarithmic behavior of network diameter leads us to hope that latency might behave logarithmically too, but since the number of hops is not the same as latency (in milliseconds), the scalability needs to be confirmed in the real world (see next section).
  • A node degree of 4 yields good latency/bandwidth balance, and we have selected this as the default value in the Streamr Network. This value is also used in all the real-world experiments described in the next section.
It’s worth noting that in such a network, the bandwidth requirement for publishers is determined by the node degree and not the number of subscribers. With a node degree 4 and a million subscribers, the publisher only uploads 4 copies of a data point, and the million subscribing nodes share the work of distributing the message among themselves. In contrast, a centralized data broker would need to push out a million copies.

Latency scales logarithmically

To see if actual latency scales logarithmically in real-world conditions, we ran large numbers of nodes in 16 different Amazon AWS data centers around the world. We ran experiments with network sizes from 32 to 2048 nodes. Each node published messages to the network, and we measured how long it took for the other nodes to get the message. The experiment was repeated 10 times for each network size.
The below image displays one of the key results of the paper. It shows a CDF (cumulative distribution function) of the measured latencies across all experiments. The y-axis runs from 0 to 1, i.e. 0% to 100%.
CDF of message propagation delay
From this graph we can easily read things like: in a 32 nodes network (blue line), 50% of message deliveries happened within 150 ms globally, and all messages were delivered in around 250 ms. In the largest network of 2048 nodes (pink line), 99% of deliveries happened within 362 ms globally.
To put these results in context, PubNub, a centralized message brokering service, promises to deliver messages within 250 ms — and that’s a centralized service! Decentralization comes with unquestionable benefits (no vendor lock-in, no trust required, network effects, etc.), but if such protocols are inferior in terms of performance or cost, they won’t get adopted. It’s pretty safe to say that the Streamr Network is on par with centralized services even when it comes to latency, which is usually the Achilles’ heel of P2P networks (think of how slow blockchains are!). And the Network will only get better with time.
Then we tackled the big question: does the latency behave logarithmically?
Mean message propagation delay in Amazon experiments
Above, the thick line is the average latency for each network size. From the graph, we can see that the latency grows logarithmically as the network size increases, which means excellent scalability.
The shaded area shows the difference between the best and worst average latencies in each repeat. Here we can see the element of chance at play; due to the randomness in which nodes become neighbours, some topologies are faster than others. Given enough repeats, some near-optimal topologies can be found. The difference between average topologies and the best topologies gives us a glimpse of how much room for optimisation there is, i.e. with a smarter-than-random topology construction, how much improvement is possible (while still staying in the realm of regular graphs)? Out of the observed topologies, the difference between the average and the best observed topology is between 5–13%, so not that much. Other subclasses of graphs, such as irregular graphs, trees, and so on, can of course unlock more room for improvement, but they are different beasts and come with their own disadvantages too.
It’s also worth asking: how much worse is the measured latency compared to the fastest possible latency, i.e. that of a direct connection? While having direct connections between a publisher and subscribers is definitely not scalable, secure, or often even feasible due to firewalls, NATs and such, it’s still worth asking what the latency penalty of peer-to-peer is.

Relative delay penalty in Amazon experiments
As you can see, this plot has the same shape as the previous one, but the y-axis is different. Here, we are showing the relative delay penalty (RDP). It’s the latency in the peer-to-peer network (shown in the previous plot), divided by the latency of a direct connection measured with the ping tool. So a direct connection equals an RDP value of 1, and the measured RDP in the peer-to-peer network is roughly between 2 and 3 in the observed topologies. It increases logarithmically with network size, just like absolute latency.
Again, given that latency is the Achilles’ heel of decentralized systems, that’s not bad at all. It shows that such a network delivers acceptable performance for the vast majority of use cases, only excluding the most latency-sensitive ones, such as online gaming or arbitrage trading. For most other use cases, it doesn’t matter whether it takes 25 or 75 milliseconds to deliver a data point.

Latency is predictable

It’s useful for a messaging system to have consistent and predictable latency. Imagine for example a smart traffic system, where cars can alert each other about dangers on the road. It would be pretty bad if, even minutes after publishing it, some cars still haven’t received the warning. However, such delays easily occur in peer-to-peer networks. Everyone in the crypto space has seen first-hand how plenty of Bitcoin or Ethereum nodes lag even minutes behind the latest chain state.
So we wanted to see whether it would be possible to estimate the latencies in the peer-to-peer network if the topology and the latencies between connected pairs of nodes are known. We applied Dijkstra’s algorithm to compute estimates for average latencies from the input topology data, and compared the estimates to the actual measured average latencies:
Mean message propagation delay in Amazon experiments
We can see that, at least in these experiments, the estimates seemed to provide a lower bound for the actual values, and the average estimation error was 3.5%. The measured value is higher than the estimated one because the estimation only considers network delays, while in reality there is also a little bit of a processing delay at each node.
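As a rough sketch of that estimation step (assuming the topology is available as an adjacency list whose edge weights are measured link latencies in milliseconds; this is an illustration, not the paper's actual code), the per-node delay estimate can be computed like this:

```python
# Estimate per-node message propagation delay from a known topology by running
# Dijkstra's algorithm from the publisher over measured link latencies.
import heapq

def estimate_delays(topology, source):
    """topology: {node: [(neighbour, latency_ms), ...]}; source: publisher node."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue                     # stale heap entry
        for neighbour, latency in topology[node]:
            nd = d + latency
            if nd < dist.get(neighbour, float("inf")):
                dist[neighbour] = nd
                heapq.heappush(heap, (nd, neighbour))
    return dist

# Toy degree-2 topology with made-up latencies:
topology = {
    "A": [("B", 20.0), ("C", 35.0)],
    "B": [("A", 20.0), ("D", 40.0)],
    "C": [("A", 35.0), ("D", 15.0)],
    "D": [("B", 40.0), ("C", 15.0)],
}
delays = estimate_delays(topology, "A")
mean_estimate = sum(d for n, d in delays.items() if n != "A") / (len(delays) - 1)
print(delays, round(mean_estimate, 1))   # estimated mean propagation delay from A
```

In practice the estimate lands slightly below the measured value, since it accounts only for network delays and not per-node processing time.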

Conclusion

The research has shown that the Streamr Network can be expected to deliver messages in roughly 150–350 milliseconds worldwide, even at a large scale with thousands of nodes subscribing to a stream. This is on par with centralized message brokers today, showing that the decentralized and peer-to-peer approach is a viable alternative for all but the most latency-sensitive applications.
It’s thrilling to think that by accepting a latency only 2–3 times longer than the latency of an unscalable and insecure direct connection, applications can interconnect over an open fabric with global scalability, no single point of failure, no vendor lock-in, and no need to trust anyone — all that becomes available out of the box.
In the real-time data space, there are plenty of other aspects to explore, which we didn’t cover in this paper. For example, we did not measure throughput characteristics of network topologies. Different streams are independent, so clearly there’s scalability in the number of streams, and heavy streams can be partitioned, allowing each stream to scale too. Throughput is mainly limited, therefore, by the hardware and network connection used by the network nodes involved in a topology. Measuring the maximum throughput would basically be measuring the hardware as well as the performance of our implemented code. While interesting, this is not a high priority research target at this point in time. And thanks to the redundancy in the network, individual slow nodes do not slow down the whole topology; the data will arrive via faster nodes instead.
Also out of scope for this paper is analysing the costs of running such a network, including the OPEX for publishers and node operators. This is a topic of ongoing research, which we’re currently doing as part of designing the token incentive mechanisms of the Streamr Network, due to be implemented in a later milestone.
I hope that this blog has provided some insight into the fascinating results the team uncovered during this research. For a more in-depth look at the context of this work, and more detail about the research, we invite you to read the full paper.
If you have an interest in network performance and scalability from a developer or enterprise perspective, we will be hosting a talk about this research in the coming weeks, so keep an eye out for more details on the Streamr social media channels. In the meantime, feedback and comments are welcome. Please add a comment to this Reddit thread or email [[email protected]](mailto:[email protected]).
Originally published by Henri at blog.streamr.network on August 24, 2020.
submitted by thamilton5 to streamr [link] [comments]

Finding SHA256 partial collisions via the Bitcoin blockchain

This is not a cryptocurrency post, per se. I used Bitcoin's blockchain as a vehicle by which to study SHA256.
The phrase "partial collision" is sometimes used to describe a pair of hashes that are "close" to one another. One notion of closeness is that the two hashes should agree on a large number of total bits. Another is that they should agree on a large number of specific (perhaps contiguous) bits.
The goal in Bitcoin mining is essentially (slight simplification here) to find a block header which, when hashed twice with SHA256, has a large number of trailing zeros. (If you have some familiarity with Bitcoin, you may be wondering: doesn't the protocol demand a large number of leading zeros? It does, kind of, but the Bitcoin protocol reverses the normal byte order of SHA256. Perhaps Satoshi interpreted SHA256 output as a byte stream in little endian order. If so, then this is a slightly unfortunate choice, given that SHA256 explicitly uses big endian byte order in its padding scheme.)
Because Bitcoin block header hashes must all have a large number of trailing zeros, they must all agree on a large number of trailing bits. Agreement or disagreement on earlier bits should, heuristically, appear independent and uniform at random. Thus, I figured it should be possible to get some nice SHA256 partial collisions by comparing block header hashes.
First, I looked for hashes that agree on a large number of trailing bits. At present, block header hashes must have about 75 trailing zeros. There are a little over 2^19 blocks in total right now, so we expect to get a further ~38 bits of agreement via a birthday attack. Although this suggests we may find a hash pair agreeing on 75 + 38 = 113 trailing bits, this should be interpreted as a generous upper bound, since early Bitcoin hashes had fewer trailing zeros (as few as 32 at the outset). Still, this gave me a good enough guess to find some partial collisions without being overwhelmed by them. The best result was a hash pair agreeing on their final 108 bits. Hex encodings of the corresponding SHA256 inputs are as follows:
23ca73454a1b981fe51cad0dbd05f4e696795ba67abb28c61aea1a024e5bbeca
a16a8141361ae9834ad171ec28961fc8a951ff1bfc3a9ce0dc2fcdbdfa2ccd35
(I will emphasize that these are hex encodings of the inputs, and are not the inputs themselves.) There were a further 11 hash pairs agreeing on at least 104 trailing bits.
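For anyone who wants to double-check the trailing-bit agreement, a small sketch along these lines should work (assuming each hex string decodes to the raw bytes fed to a single SHA256 call, and that "trailing bits" refers to the low-order bits of the digest read as a big-endian integer; byte-order conventions vary across Bitcoin tooling, so the exact count may differ under other conventions):

```python
# Count how many trailing bits two SHA256 outputs have in common.
import hashlib

def trailing_agreement(hex_input_a, hex_input_b):
    a = int.from_bytes(hashlib.sha256(bytes.fromhex(hex_input_a)).digest(), "big")
    b = int.from_bytes(hashlib.sha256(bytes.fromhex(hex_input_b)).digest(), "big")
    diff = a ^ b
    if diff == 0:
        return 256                              # full collision
    return (diff & -diff).bit_length() - 1      # trailing zero bits of the XOR

print(trailing_agreement(
    "23ca73454a1b981fe51cad0dbd05f4e696795ba67abb28c61aea1a024e5bbeca",
    "a16a8141361ae9834ad171ec28961fc8a951ff1bfc3a9ce0dc2fcdbdfa2ccd35"))
```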
Next, I searched for hashes that agree on a large number of total bits. (In other words, hash pairs with low Hamming distance.) With a little over 2^19 blocks, we have around (2^19 choose 2) ~= 2^37 block pairs. Using binomial distribution statistics, I estimated that it should be possible to find hash pairs that agree on more than 205 bits, but probably not more than 210. Lo and behold, the best result here was a hash pair agreeing on 208 total bits. Hex encodings of the corresponding SHA256 inputs are as follows:
dd9591ff114e8c07be30f0a7998cf09c351d19097766f15a32500ee4f291e7e3
c387edae394b3b9b7becdddcd829c8ed159a32879c156f2e23db73365fde4a94
There were 8 other hash pairs agreeing on at least 206 total bits.
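The same kind of check works for total agreement, i.e. 256 minus the Hamming distance of the two digests (with the same hedges as above about how the listed inputs map to the compared hashes):

```python
# Bits of total agreement between two SHA256 digests (256 - Hamming distance).
import hashlib

def agreement_bits(hex_input_a, hex_input_b):
    a = hashlib.sha256(bytes.fromhex(hex_input_a)).digest()
    b = hashlib.sha256(bytes.fromhex(hex_input_b)).digest()
    xor = int.from_bytes(a, "big") ^ int.from_bytes(b, "big")
    return 256 - bin(xor).count("1")   # popcount of the XOR = differing bits

print(agreement_bits(
    "dd9591ff114e8c07be30f0a7998cf09c351d19097766f15a32500ee4f291e7e3",
    "c387edae394b3b9b7becdddcd829c8ed159a32879c156f2e23db73365fde4a94"))
```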
So how interesting are these results, really? One way to assess this is to estimate how difficult it would be to get equivalent results by conventional means. I'm not aware of any clever tricks that find SHA256 collisions (partial or full) faster than brute force. As far as I know, birthday attacks are the best known approach.
To find a hash pair agreeing on their final 108 bits, a birthday attack would require 2^54 time and memory heuristically. Each SHA256 hash consists of 2^5 bytes, so 2^59 is probably a more realistic figure. This is "feasible", but would probably require you to rent outside resources at great expense. Writing code to perform this attack on your PC would be inadvisable. Your computer probably doesn't have the requisite ~600 petabytes of memory, anyway.
The hash pair agreeing on 208 of 256 bits is somewhat more remarkable. By reference to binomial distribution CDFs, a random SHA256 hash pair should agree on at least 208 bits with probability about 2^-81. A birthday attack will cut down on the memory requirement by the normal square root factor - among ~2^41 hashes, you expect that there will be such a pair. But in this case, it is probably necessary to actually compare all hash pairs. The problem of finding the minimum Hamming distance among a set doesn't have obvious shortcuts in general. Thus, a birthday attack performed from scratch would heuristically require about 2^81 hash comparisons, and this is likely not feasible for any entity on Earth right now.
I don't think these results carry any practical implications for SHA256. These partial collisions are in line with what one would expect without exploiting any "weaknesses" of SHA256. If anything, these results are a testament to just how much total work has been put into the Bitcoin blockchain. Realistically, the Bitcoin blockchain will never actually exhibit a SHA256 full collision. Still, I thought these were fun curiosities that were worth sharing.
submitted by KillEveryLastOne to crypto [link] [comments]

Upcoming Major Riecoin 0.20 Upgrade

Upcoming Major Riecoin 0.20 Upgrade
A new major Riecoin upgrade is planned, and includes a hard fork. Below is a summary of the changes so far and the hard fork improvements. More details can be found on BitcoinTalk. Feel free to ask Pttn there or on Discord if you have questions regarding the update.
The first step of this upgrade was to update the base code to Bitcoin’s 0.20, which is done. You can find the experimental code at the Github repository. Experimental binaries can also be downloaded here. Despite their prerelease status, they should work fine, though please backup your wallets if you plan to use 0.20, just in case.
Pool operators and other advanced Riecoin users should start looking into the changes and update their software accordingly, as well as closely follow the Riecoin Core development.
Here is a list of notable changes from 0.16.3.1.
The next step will be the hard fork, in order to improve Riecoin in multiple ways. Here is the list of planned changes.
Once the development is advanced enough, a date will be chosen for the hard fork. Testnet will be hard forked first to ensure the implementation works correctly. Stay tuned!
submitted by PttnMe to RieCoin [link] [comments]

Review and Prospect of Crypto Economy-Development and Evolution of Consensus Mechanism (2)

Review and Prospect of Crypto Economy-Development and Evolution of Consensus Mechanism (2)

https://preview.redd.it/a51zsja94db51.png?width=567&format=png&auto=webp&s=99e8080c9e9b1fb5e11cbd70f915f9cb37188f81
Foreword
The consensus mechanism is one of the important elements of the blockchain and the core rule of the normal operation of the distributed ledger. It is mainly used to solve the trust problem between people and determine who is responsible for generating new blocks and maintaining the effective unification of the system in the blockchain system. Thus, it has become an everlasting research hot topic in blockchain.
This article starts with the concept and role of the consensus mechanism, giving the reader a preliminary understanding of the consensus mechanism as a whole. Then, starting with the two armies problem and the Byzantine generals problem, it traces the evolution of consensus mechanisms in the order in which they were proposed. Next, it briefly introduces the current mainstream consensus mechanisms in terms of concept, working principle and representative projects, and compares their advantages and disadvantages. Finally, it gives suggestions on how to choose a consensus mechanism for a blockchain project and points out possible directions for the future development of consensus mechanisms.
Contents
First, concept and function of the consensus mechanism
1.1 Concept: The core rules for the normal operation of distributed ledgers
1.2 Role: Solve the trust problem and decide the generation and maintenance of new blocks
1.2.1 Used to solve the trust problem between people
1.2.2 Used to decide who is responsible for generating new blocks and maintaining effective unity in the blockchain system
1.3 Mainstream model of consensus algorithm
Second, the origin of the consensus mechanism
2.1 The two armies and the Byzantine generals
2.1.1 The two armies problem
2.1.2 The Byzantine generals problem
2.2 Development history of consensus mechanism
2.2.1 Classification of consensus mechanism
2.2.2 Development frontier of consensus mechanism
Third, Common Consensus System
Fourth, Selection of consensus mechanism and summary of current situation
4.1 How to choose a consensus mechanism that suits you
4.1.1 Determine whether the final result is important
4.1.2 Determine how fast the application process needs to be
4.1.3 Determine the degree of decentralization the application requires
4.1.4 Determine whether the system can be terminated
4.1.5 Select a suitable consensus algorithm after weighing the advantages and disadvantages
4.2 Future development of consensus mechanism
Last lecture review: Chapter 1 Concept and Function of Consensus Mechanism plus Chapter 2 Origin of Consensus Mechanism
Chapter 3 Common Consensus Mechanisms (Part 1)
Figure 6 Summary of relatively mainstream consensus mechanisms
📷
https://preview.redd.it/9r7q3xra4db51.png?width=567&format=png&auto=webp&s=bae5554a596feaac948fae22dffafee98c4318a7
Source: Hasib Anwar, "Consensus Algorithms: The Root Of The Blockchain Technology"
The picture above shows 14 relatively mainstream consensus mechanisms summarized by the geek Hasib Anwar, including PoW (Proof of Work), PoS (Proof of Stake), DPoS (Delegated Proof of Stake), LPoS (Leased Proof of Stake), PoET (Proof of Elapsed Time), PBFT (Practical Byzantine Fault Tolerance), SBFT (Simple Byzantine Fault Tolerance), DBFT (Delegated Byzantine Fault Tolerance), DAG (Directed Acyclic Graph), Proof-of-Activity, Proof-of-Importance, Proof-of-Capacity, Proof-of-Burn, and Proof-of-Weight.
Next, we will mainly introduce and analyze the top ten consensus mechanisms of the current blockchain.
》POW
-Concept:
Proof-of-work mechanism. That is, a participant proves, by spending a certain amount of computing time, that the required work has actually been done.
-Principle:
Figure 7 PoW work proof principle
📷
https://preview.redd.it/xupacdfc4db51.png?width=554&format=png&auto=webp&s=3b6994641f5890804d93dfed9ecfd29308c8e0cc
The PoW used by Bitcoin is based on SHA-256, a 256-bit algorithm from the family of cryptographic hash functions:
Proof of work output = SHA256 (SHA256 (block header));
if (proof of work output >= target value), change the nonce and hash again, repeatedly comparing the new output with the target value until a hash below the target is found.
New difficulty value = old difficulty value * (20160 minutes / time spent on the last 2016 blocks)
Target value = maximum target value / difficulty value
The maximum target value is a fixed number. If the last 2016 blocks took less than 20160 minutes, this ratio is greater than 1, so the difficulty increases and the target value is adjusted smaller; if they took longer, the difficulty decreases and the target becomes larger. In this way Bitcoin keeps the average block generation speed close to one block every 10 minutes.
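As a rough illustration of this retargeting rule (a simplified sketch, not Bitcoin's exact consensus code, which works on compact nBits targets and clamps the adjustment to a factor of 4):

```python
# Simplified Bitcoin-style retargeting sketch: every 2016 blocks the difficulty
# is rescaled so that one block keeps taking roughly 10 minutes on average.
MAX_TARGET = 0xFFFF * 2**208          # the "difficulty 1" target
EXPECTED_MINUTES = 2016 * 10          # 20160 minutes per retarget window

def retarget(old_difficulty, minutes_spent_on_last_2016_blocks):
    ratio = EXPECTED_MINUTES / minutes_spent_on_last_2016_blocks
    new_difficulty = old_difficulty * ratio     # faster blocks -> higher difficulty
    new_target = MAX_TARGET // int(new_difficulty)
    return new_difficulty, new_target

# Example: the last window took only 15120 minutes (25% too fast),
# so the difficulty rises by a third and the target shrinks accordingly.
print(retarget(1_000_000, 15120))
```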
-Representative applications: BTC, etc.
》POS
-Concept:
Proof of stake. That is, a mechanism for reaching consensus based on coin holdings. The longer the coins are held, the greater the probability of getting the block reward.
-Principle:
PoS block selection formula (simplified): hash(block_header) <= coin age * target, where coin age = number of coins * holding time of those coins.
Here, coin age means that the longer coins have been held, the easier it is to find a valid block. The coin age is obtained by multiplying the number of coins owned by the staker by the time those coins have been held, which also means that the more coins you have, the easier it is to find a valid block. In this way, PoS avoids the resource waste of PoW, and acquiring 51% of all the coins on the network is prohibitively expensive for an attacker, which greatly mitigates 51% attacks.
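To make the idea concrete, here is a heavily simplified, Peercoin-style coin-age check (an illustrative sketch of the formula above, not how Ethereum or any specific chain actually implements PoS; the constant and kernel layout are made up):

```python
# Simplified coin-age staking check: the more coins you stake and the longer
# you have held them, the larger the target you may hit, so the easier it is
# to mint the next block. Constants and kernel contents here are illustrative.
import hashlib, time

BASE_TARGET = 2**240   # illustrative constant, not a real chain parameter

def can_mint(stake_coins, holding_days, kernel_bytes):
    coin_age = stake_coins * holding_days
    kernel_hash = int.from_bytes(hashlib.sha256(kernel_bytes).digest(), "big")
    return kernel_hash < BASE_TARGET * coin_age   # easier as coin age grows

# A staker with 500 coins held for 30 days tries the current kernel data:
kernel = b"prev-block-hash|stake-txid|" + str(int(time.time())).encode()
print(can_mint(500, 30, kernel))
```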
-Representative applications: ETH, etc.
》DPoS
-Concept:
Delegated proof of stake. That is, coin holders elect super nodes by voting, and those super nodes operate the entire network on their behalf, similar to a representative congress system.
-Principle:
The DPOS algorithm is divided into two parts. Elect a group of block producers and schedule production.
Election: Only candidate nodes with the right to stand for election can become witnesses, and ultimately only the top N candidates are elected; each of them must obtain more than 50% of the votes to be successfully elected. In addition, this list is re-elected at regular intervals.
Scheduled production: Under normal circumstances, block producers take turns generating a block every 3 seconds. Assuming that no producer misses its turn, the chain they produce is bound to be the longest chain. Each witness must produce its block within its allotted time slot; if it exceeds that time, it loses the right to produce for that round and the right passes to the next witness. A witness that misses its slot not only goes unpaid, but may also lose its witness status.
-Representative applications: EOS, etc.
》DPoW
-Concept:
Delayed proof of work. A newer consensus mechanism based on PoB and DPoS: the work that miners prove with their hash power earns them non-tradable "wood", which is later burned to obtain block production rights, balancing hash power against mining rights (the mechanics are described under Principle below).
-Principle:
In a DPoW-based blockchain, miners are no longer rewarded directly with tokens but with "wood" that can be burned. Miners apply their computing power through the hash algorithm to prove their work and receive the corresponding wood; this wood is not tradable. Once enough wood has accumulated, it can be taken to the burning site and burned. Through a set of algorithms, the participants (or block producers, or a group of block producers) who burn more wood obtain the right to generate blocks in the next time segment, and receive token rewards for blocks produced successfully. Since more than one participant may burn wood in a given time period, the probability of producing a block in the next period is determined by the amount of wood each one has burned: the more wood burned, the higher the probability of obtaining block rights in the next period.
Two node types: notary node and normal node.
The 64 notary nodes are elected by the stakeholders of the dPoW blockchain, and notarized confirmed blocks can be added from the dPoW blockchain to the attached PoW blockchain. Once a block is added, its hash is placed in a Bitcoin transaction signed by 33 notary nodes, creating a record of the dPoW block hash in the Bitcoin blockchain that has been notarized by most of the notary nodes in the network. To avoid mining wars between notary nodes, which would reduce the efficiency of the network, Komodo designed a mining method that uses a round-robin mechanism with two operating modes. In the "No Notary" mode, all network nodes can participate in mining, similar to a traditional PoW consensus mechanism. In the "Notaries Active" mode, each notary node is allowed to mine a block at its current difficulty, while the other notary nodes must mine at 10 times that difficulty, and all normal nodes mine at 100 times the difficulty of the notary nodes.
Figure 8 DPoW operation process without a notary node
📷
https://preview.redd.it/3yuzpemd4db51.png?width=500&format=png&auto=webp&s=f3bc2a1c97b13cb861414d3eb23a312b42ea6547
-Representative applications: CelesOS, Komodo, etc.
CelesOS Research Institute丨DPoW consensus mechanism-combustible mining and voting
》PBFT
-Concept:
Practical Byzantine fault tolerance algorithm. That is, the complexity of the algorithm is reduced from exponential to polynomial level, making the Byzantine fault-tolerant algorithm feasible in practical system applications.
-Principle:
Figure 9 PBFT algorithm principle
📷
https://preview.redd.it/8as7rgre4db51.png?width=567&format=png&auto=webp&s=372be730af428f991375146efedd5315926af1ca
First, the client sends a request to the primary node to invoke a service operation, and the primary node broadcasts the request to the other replicas. All replicas execute the request and send their results back to the client. The client waits until f+1 different replica nodes return the same result, and takes that as the final result of the operation.
Two preconditions: 1. All nodes must be deterministic; that is, the result of an operation must be the same given the same conditions and parameters. 2. All nodes must start from the same state. Under these two preconditions, even if some replica nodes fail, the PBFT algorithm agrees on a total order for the execution of requests across all non-failed replica nodes, thereby ensuring safety.
-Representative applications: Tendermint Consensus, etc.
Next Lecture: Chapter 3 Common Consensus Mechanisms (Part 2) + Chapter 4 Consensus Mechanism Selection and Status Summary
CelesOS
As the first DPOW financial blockchain operating system, CelesOS adopts "consensus mechanism 3.0" to break through the "impossible triangle", providing high TPS while also allowing for decentralization. It is committed to creating a financial blockchain operating system that embraces regulation, providing services for financial institutions and for the development of applications on the regulated chain, and formulating a role- and consensus-based regulatory layer agreement for the ecosystem.
The CelesOS team is dedicated to building a bridge between blockchain and regulatory agencies/financial industry. We believe that only blockchain technology that cooperates with regulators will have a real future. We believe in and contribute to achieving this goal.

📷Website
https://www.celesos.com/
📷 Telegram
https://t.me/celeschain
📷 Twitter
https://twitter.com/CelesChain
📷 Reddit
https://www.reddit.com/useCelesOS
📷 Medium
https://medium.com/@celesos
📷 Facebook
https://www.facebook.com/CelesOS1
📷 Youtube
https://www.youtube.com/channel/UC1Xsd8wU957D-R8RQVZPfGA
submitted by CelesOS to u/CelesOS [link] [comments]

Looking for Technical Information about Mining Pools

I'm doing research on how exactly bitcoins are mined, and I'm looking for detailed information about how mining pools work - i.e. what exactly is the pool server telling each participating miner to do.
It's so far my understanding that, when Bitcoins are mined, the following steps take place:
  1. Transactions from the mempool are selected for a new block; this may or may not be all the transactions in said mempool. A coinbase transaction - which consists of the miner's wallet address and other arbitrary data, and which creates the new bitcoin - will also be added to the new block.
  2. All of said transactions are hashed together into a Merkle Root. The hashing algorithm is Double SHA-256.
  3. A block header is formed for the new block. Said block header consists of a Version, the Block Hash of the Previous Block in the Blockchain, said Merkle Root from earlier, a timestamp in UTC, the target, and a nonce - which is 32 bits long and can be any value from 0x00000000 to 0xFFFFFFFF (a total of 4,294,967,296 nonce values in total).
  4. The nonce value is set to 0x00000000, and said block header is double hashed to get the Block Hash of the current block; and if said Block Hash starts with a certain number of zeroes (depending on the difficulty), the miner sends the block to the Bitcoin Network, the block successfully added to the blockchain and the miner is awarded with newly created bitcoin.
  5. But if said Block Hash does not start with the required number of zeroes, said block will not be accepted by the network, and the miner double hashes the block again with a different nonce value; if none of the 4,294,967,296 nonce values yields a Block Hash with the required number of zeroes, the block cannot be added to the network as-is - in that case, the miner will either need to change the timestamp and try all 4,294,967,296 nonce values again, or start over and compose a new block with a different set of transactions (either a different coinbase transaction, a different set of transactions from the mempool, or both). A simplified sketch of this loop is shown just after this list.
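For illustration, here is a minimal sketch of steps 3-5 above (field values, encodings and the toy target are assumptions for the example; a real miner fills them from the actual chain state):

```python
# Minimal sketch of the nonce loop: build an 80-byte header, double-SHA256 it,
# and accept the first nonce whose hash meets the target.
import hashlib, struct

def double_sha256(data):
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def mine(version, prev_hash, merkle_root, timestamp, bits, target):
    for nonce in range(0x100000000):          # 0x00000000 .. 0xFFFFFFFF
        header = (struct.pack("<L", version) + prev_hash + merkle_root +
                  struct.pack("<LLL", timestamp, bits, nonce))
        block_hash = double_sha256(header)
        # Explorers display block hashes byte-reversed, which is why the
        # required zeros appear as "leading" zeros in that representation.
        if int.from_bytes(block_hash[::-1], "big") <= target:
            return nonce, block_hash[::-1].hex()
    return None  # all 2^32 nonces exhausted: change timestamp or extranonce

# Toy run with an extremely easy target so it finishes almost instantly:
result = mine(0x20000000, b"\x00" * 32, b"\x11" * 32, 1_600_000_000,
              0x1D00FFFF, target=2**252)
print(result)
```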
Now, what I'm trying to figure out is what exactly each miner is doing differently in a mining pool, and if it is different depending on the pool.
One thing I've read is that a mining pool gives each participating miner a different set of transactions from the mempool.
I've also read that, because the most sophisticated miners can try all 4,294,967,296 nonce values in a fraction of a second, and since the timestamp can only be updated once per second, the coinbase transaction is used as a "second nonce" (although, it is my understanding that, being part of a transaction, changing this "extra nonce" means all the transactions need to be double hashed into a new Merkle Root); and I may have read someplace that miners could also be given the same set of transactions from the mempool, but are each told to use a different set of "extra nonce" values for the coinbase transaction.
Is there anything else that pools tell miners to do differently? Is each pool different in the instructions it gives to the participating miners? Did I get anything wrong?
I want to make sure I have a full technical understanding of what mining pools are doing to mine bitcoin.
submitted by sparky77734 to Bitcoin [link] [comments]

Attacking bitcoin with classical simulation of quantum computers?

Bitcoin is a digital currency based on blockchains. Each block in a blockchain contains transaction information, and bitcoin miners "mine" each block in order to verify the transactions. When a miner mines a block, he receives a hefty reward (6.25 bitcoins).
To mine a block, a miner has to do this: SHA256(SHA256(bitcoin block header info)) to produce a hash with a certain amount of leading zeroes.
Now, let's say I want to produce a hash with 94 bits of leading zeroes. If we use the classical brute force method, we would need to iterate through 2^94 SHA256(SHA256(x)) hashes before finding a valid hash output. However, if we use quantum computers, which implement Grover's algorithm, which provides a quadratic speedup, we would only need to iterate through 2^47 hashes.
Is this doable with a quantum computer simulator on a classical computer? If we need to iterate through 2^47 hashes, then 47 qubits should suffice. From what I know, 47 qubits can be simulated on a classical system. Therefore, if we use a 47-qubit quantum computer simulator, can we find a bitcoin block hash with 94 bits of leading zeroes? If so, how long would this take (months? years?)?
I'm not very familiar with quantum computers yet; I'm only starting to learn about it.
submitted by Palpatine88888 to QuantumComputing [link] [comments]

Can a quantum computer be used for bitcoin mining?

This has been bothering me for a while.
I'm a newbie in computer science, and I just found out about Grover’s algorithm, which can only be implemented on a quantum computer. Supposedly it can achieve a quadratic speedup over a classical computer, brute-forcing a solution to an n-bit symmetric encryption key in 2^(n/2) iterations.
This led me to think that, by utilizing a quantum computer or quantum simulator of about 40-qubits that runs Grover's algorithm, is it possible to mine bitcoins this way? The current difficulty of bitcoin mining is about 15,466,098,935,554 (approximately 2^44), which means that it would take about 2^44*2^32=2^76 SHA256 hashes before a valid block header hash is found.
However, by implementing Grover's algorithm, we would only need to sort through 2^(76/2) = 2^38 hashes to discover a valid block header hash. A 38-qubit quantum computer should be sufficient in this case - which means the 40-qubit quantum computer should be more than enough to handle bitcoin mining.
Therefore - is it possible to use quantum computers to mine bitcoins this way? I'm not too familiar with quantum computers, so please correct me if I missed something.......
NOTE: I am NOT asking whether it is possible to use quantum computers to break the ECDSA secp256k1 algorithm, which would effectively allow anyone to steal bitcoins from wallets. I know that this would require much more than 40 qubits, and is definitely not happening in the near-future.
Rather, I'm asking about bitcoin mining, which is a much easier problem than trying to break ECDSA secp256k1.
submitted by Palpatine88888 to QuantumComputing [link] [comments]

G. Maxwell just dropped a bomb on the bitcoin-devs mailing list. (what u think?)

G. Maxwell just dropped a bomb on the bitcoin-devs mailing list. (what u think?) submitted by Bitzuca to Bitcoin [link] [comments]

TKEYSPACE — blockchain in your mobile

TKEYSPACE — blockchain in your mobile

https://preview.redd.it/w8o3bcvjrtx41.png?width=1400&format=png&auto=webp&s=840ac3872156215b30e708920edbef4583190654
Someone says that the blockchain in the phone is marketing. This is possible for most applications, but not for Tkeycoin. Today we will talk about how the blockchain works in the TkeySpace app.
Who else is not in the topic, TkeySpace is a financial application for decentralized and efficient management of various cryptocurrencies, based on a distributed architecture without using a client-server.
In simple words, it is a blockchain in the user's mobile device, designed to resist hacking and server-side attacks, with all data encrypted using modern cryptographic methods.
https://preview.redd.it/8uku6thlrtx41.png?width=1280&format=png&auto=webp&s=e1a610244da53100a5bc6b821ee5c799c6493ac4

Blockchain

Let’s start with the most important thing — the blockchain works on the principles of P2P networks, when there is no central server and each device is both a server and a client, such an organization allows you to maintain the network performance with any number and any combination of available nodes.
For example, there are 12 machines in the network, and anyone can contact anyone. As a client (resource consumer), each of these machines can send requests for the provision of some resources to other machines within this network and receive them. As a server, each machine must process requests from other machines in the network, send what was requested, and perform some auxiliary and administrative functions.
With traditional client-server systems, an entire social network, messenger, or other service can become completely unavailable, because relying on centralized infrastructure means there is a very specific number of points of failure. If the main data center is damaged due to an earthquake or any other event, access to information will be slowed or completely disabled.
With a P2P solution, the failure of one network member does not affect the network operation in any way. P2P networks can easily switch to offline mode when the channel is broken — in which it will exist completely independently and without any interaction.
Instead of storing information in a single central point, as traditional recording methods do, multiple copies of the same data are stored in different locations and on different devices on the network, such as computers or mobile devices.

https://i.redd.it/2c4sv7rnrtx41.gif
This means that even if one storage point is damaged or lost, multiple copies remain secure in other locations. Similarly, if one part of the information is changed without the consent of the rightful owners, there are many other copies where the information is correct, which makes the false record invalid.
The information recorded in the blockchain can take any form, whether it is a transfer of money, ownership, transaction, someone’s identity, an agreement between two parties, or even how much electricity a light bulb used.
However, this requires confirmation from multiple devices, such as nodes in the network. Once an agreement, otherwise known as consensus, is reached between these devices to store something on the blockchain — it can’t be challenged, deleted, or changed.
The technology also allows you to perform a truly huge amount of computing in a relatively short time, which even on supercomputers would require, depending on the complexity of the task, many years or even centuries of work. This performance is achieved because a certain global task is divided into a large number of blocks, which are simultaneously performed by hundreds of thousands of devices participating in the project.

P2P messaging and syncing in TkeySpace

TkeySpace is a node of the TKEY network and other supported networks. When you launch the app, your mobile node connects to an extensive network of supported blockchains and syncs with full nodes to validate transactions and incoming information between nodes; the nodes thus organize a graph of connections between them.
You can always check the node information in the TkeySpace app in ⚙ Settings → Contact and peer info → App Status;

https://preview.redd.it/co1k25kqrtx41.png?width=619&format=png&auto=webp&s=e443a436b11d797b475b00a467cd9609cac66b83
TkeySpace initiates connections to the servers registered in the blockchain protocol as the main (seed) servers; from these servers it obtains the addresses of nodes it can join, and in turn the nodes it connects to share information about other nodes.

https://i.redd.it/m21pw88srtx41.gif
TkeySpace sends network messages to nodes from supported blockchains in the app to get up-to-date data from the network.
The Protocol uses data structures for communication between nodes, such as block propagation over the network, so before network messages are read, nodes check the “magic number”, check the first bytes, and determine the type of data structure. In the blockchain, the “magic number” is the network ID used to filter messages and block traffic from other p2p networks.
Magic numbers are used in computer science, both for files and protocols. They identify the type of file/data structure. A program that receives such a file/data structure can check the magic number and immediately find out the intended type of this file/data structure.
The first message that your node sends is called a Version Message. In response, the node waits for a Verack message to establish a connection between other peers. The exchange of such messages is called a “handshake”.
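As a rough illustration of this message framing and handshake (a sketch only; the magic value shown is Bitcoin mainnet's network ID, while Tkeycoin and every other chain use their own constants):

```python
# Sketch of Bitcoin-style P2P message framing: 4-byte network magic,
# 12-byte command name, payload length, and a double-SHA256 checksum.
import hashlib, struct

MAINNET_MAGIC = bytes.fromhex("f9beb4d9")   # Bitcoin mainnet network ID

def parse_message_header(raw):
    magic, command, length, checksum = struct.unpack("<4s12sL4s", raw[:24])
    if magic != MAINNET_MAGIC:
        raise ValueError("message from another network - drop it")
    payload = raw[24:24 + length]
    expected = hashlib.sha256(hashlib.sha256(payload).digest()).digest()[:4]
    if checksum != expected:
        raise ValueError("corrupted payload")
    return command.rstrip(b"\x00").decode(), payload

# Build and parse a minimal "verack" message (empty payload) as an example:
payload = b""
chk = hashlib.sha256(hashlib.sha256(payload).digest()).digest()[:4]
raw = MAINNET_MAGIC + b"verack".ljust(12, b"\x00") + struct.pack("<L", 0) + chk
print(parse_message_header(raw))   # ('verack', b'')
```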

https://preview.redd.it/b6gh0hitrtx41.png?width=785&format=png&auto=webp&s=0101eaec6469fb53818486fa13da110f6a4a851d
After the “handshake” is established, TkeySpace starts connecting to other nodes in the network to determine the last block at the end of the required blockchain. At this point, nodes request information about blocks they know of using GetBlock messages; in response, your node receives an inv (Inventory Message) from another node indicating that it has the data that was requested by the TkeySpace node.
In response to the received message, inv — TkeySpace sends a GetData message containing a list of blocks starting immediately after the last known hash.

https://preview.redd.it/lare5lsurtx41.png?width=768&format=png&auto=webp&s=da8d27110f406f715292b439051ca221fab47f77

Loading and storing blocks

After exchanging messages, the block information is loaded and transactions are uploaded to your node. To avoid storing tons of information and optimize hard disk space and data processing speed, we use RDBMS — PostgreSQL in full nodes (local computer wallet).
In the TkeySpace mobile app, we use SQLite, and validation takes place by uploading block headers through the Merkle Tree, using the bloom filter — this allows you to optimize the storage of your mobile device as much as possible.
The block header includes the hash of the previous block, the Merkle root of the block's transactions, and additional service information; hashing the header produces the block's own hash.
Block headers in the Tkeycoin network are 84 bytes, due to the extended parameters that support nChains, which will soon be launched in "combat" (production) mode. The block headers of Bitcoin, Dash and Litecoin are 80 bytes.

https://preview.redd.it/uvv3qz7wrtx41.png?width=1230&format=png&auto=webp&s=5cf0cd8b6d099268f3d941aac322af05e781193c
And so, let’s continue — application nodes receive information from the blockchain by downloading block headers, and all data is synchronized using the Merkle tree; more precisely, your node receives and validates information against the Merkle root.
The hash tree was developed in 1979 by Ralph Merkle and named in his honor. The structure of the system has received this name also because it resembles a tree.
The Merkle tree is a complete binary tree whose leaf vertices contain hashes of data blocks, and whose inner vertices contain hashes of the concatenated values of their child vertices. The root node of the tree contains a hash over the entire data set, meaning the hash tree acts as a one-way fingerprint of all the data. The Merkle tree is used for the efficient storage of transactions in the cryptocurrency blockchain. It allows you to get a "fingerprint" of all transactions in the block, as well as to verify transactions efficiently.

https://preview.redd.it/3hmbthpxrtx41.png?width=677&format=png&auto=webp&s=cca3d54c585747e0431c6c4de6eec7ff7e3b2f4d
Hash trees have an advantage over hash chains or hash functions. When using hash trees, it is much less expensive to prove that a certain block of data belongs to a set. Since different blocks are often independent data, such as transactions or parts of files, we are interested in being able to check only one block without recalculating the hashes for the other nodes in the tree.
https://i.redd.it/f7o3dh7zrtx41.gif
The Merkle Tree scheme allows you to check whether the hash value of a particular transaction is included in Merkle Root, without having all the other transactions in the block. So by having the transaction, block header, and Merkle Branch for that transaction requested from the full node, the digital wallet can make sure that the transaction was confirmed in a specific block.

https://i.redd.it/88sz13w0stx41.gif
The Merkle tree, which is used to prove that a transaction is included in a block, is also very well scaled. Because each new “layer” added to the tree doubles the total number of “leaves” it can represent. You don’t need a deep tree to compactly prove transaction inclusion, even among blocks with millions of transactions.
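A compact sketch of this construction (Bitcoin-style double SHA256; the transaction ids are fabricated for the example) shows how a light client verifies one transaction against the Merkle root using only its branch:

```python
# Sketch of a Merkle root, a Merkle branch, and SPV-style verification.
import hashlib

def dsha(data):
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def merkle_root(txids):
    level = list(txids)
    while len(level) > 1:
        if len(level) % 2:                      # odd count: duplicate last hash
            level.append(level[-1])
        level = [dsha(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_branch(txids, index):
    branch, level = [], list(txids)
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        sibling = index ^ 1
        branch.append((level[sibling], sibling < index))  # (hash, sibling-is-left)
        level = [dsha(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return branch

def verify(txid, branch, root):
    h = txid
    for sibling, sibling_is_left in branch:
        h = dsha(sibling + h) if sibling_is_left else dsha(h + sibling)
    return h == root

txids = [dsha(bytes([i])) for i in range(5)]    # five fake transaction ids
root = merkle_root(txids)
print(verify(txids[3], merkle_branch(txids, 3), root))   # True
```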

Statistical constants and nChains

To support the Tkeycoin cryptocurrency, the TkeySpace application uses additional statistical constants to prevent serialization of Merkle tree hashes, which provides an additional layer of security.
Also, for Tkeycoin, support for multi-chains (nChains) is already included in the TkeySpace app, which will allow you to use the app in the future with most of the features of the TKEY Protocol, including instant transactions.

The Bloom Filter

An additional level of privacy is provided by the bloom filter — which is a probabilistic data structure that allows you to check whether an element belongs to a set.

https://preview.redd.it/7ejkvi82stx41.png?width=374&format=png&auto=webp&s=ed75cd056949fc3a2bcf48b4d7ea78d3dc6d81f3
The bloom filter looks for whether a particular transaction is linked to Alice, not whether Alice has a specific cryptocurrency. In this way, transactions and received IDs are analyzed through a bloom filter. When “Alice wants to know about transaction X”, an ID is requested for transaction X, which is compared with the filled segments in her bloom filter. If “Yes” is received, the node can get the information and verify the transaction.

https://preview.redd.it/gjpsbss3stx41.png?width=1093&format=png&auto=webp&s=4cdcbc827849d13b7d6f0b7e7ba52e65ddc03a82
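As a rough, dependency-free sketch of such a filter (real BIP 37 bloom filters use MurmurHash3 with per-filter tweaks; SHA256 is used here only to keep the example self-contained, and the inserted items are made up):

```python
# Toy bloom filter: the wallet adds the items it cares about (addresses,
# outpoints), and the full node tests transaction elements against the filter,
# forwarding only probable matches.
import hashlib

class BloomFilter:
    def __init__(self, size_bits=1024, num_hashes=5):
        self.size = size_bits
        self.num_hashes = num_hashes
        self.bits = bytearray(size_bits // 8)

    def _positions(self, item):
        for i in range(self.num_hashes):
            digest = hashlib.sha256(bytes([i]) + item).digest()
            yield int.from_bytes(digest[:8], "big") % self.size

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def maybe_contains(self, item):
        # False = definitely not added; True = probably added (false positives possible).
        return all(self.bits[pos // 8] & (1 << (pos % 8))
                   for pos in self._positions(item))

wallet_filter = BloomFilter()
wallet_filter.add(b"alice-address-bytes")                    # hypothetical item
print(wallet_filter.maybe_contains(b"alice-address-bytes"))  # True
print(wallet_filter.maybe_contains(b"unrelated-txid"))       # almost surely False
```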

HD support

The multi-currency wallet TkeySpace is based on HD (hierarchical deterministic) key generation, a privacy-oriented method for generating and managing addresses. Each wallet address is derived from an xPub (extended public key). The app is completely anonymous — an individual address is generated for each transaction to accept a particular cryptocurrency. Even from a purely technical point of view, reusing the same address is bad for the system, not to mention for your privacy. We recommend that you always use a new address for transactions to ensure the necessary level of privacy and security.
The EXT_PUBLIC_KEY and EXT_SECRET_KEY values for DASH, Bitcoin, and Litecoin are completely identical. Tkeycoin uses its values, as well as other methods for storing transactions and blocks (RDBMS), and of course — nChains.

Secret key

Wallets in the blockchain have public and private keys.
https://preview.redd.it/br9kk8n5stx41.png?width=840&format=png&auto=webp&s=a36e4c619451735469a9cff57654d322467e4fba
Centralized applications usually store users’ private keys on their servers, which makes users’ funds vulnerable to hacker attacks or theft.
A private key is a special combination of characters that provides access to cryptocurrencies stored on the account. Only a person who knows the key can move and spend digital assets.
TkeySpace stores the key only on the user's device and only in encrypted form. The encrypted key is presented as a mnemonic phrase (backup phrase), which is very convenient for users. Unlike complex cryptographic ciphers, the phrase is easy to save or write down. A backup phrase provides the maximum level of security.
A mnemonic phrase is 12 or 24 words that are generated using random-number entropy. If a phrase consists of 12 words, the number of possible combinations is 2048^12 = 2^132; the phrase encodes 128 bits of entropy plus a 4-bit checksum. To restore the wallet, you must enter the mnemonic phrase in strict order, exactly as it was presented after generation.
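A quick sanity check of those numbers (pure arithmetic, no external wordlist needed):

```python
# BIP39 arithmetic: 2048 words, 11 bits per word, 1 checksum bit per 32 entropy bits.
WORDLIST_SIZE = 2048   # 2**11

def phrase_bits(num_words):
    total_bits = num_words * 11
    checksum_bits = total_bits // 33
    return total_bits, total_bits - checksum_bits, checksum_bits

print(WORDLIST_SIZE ** 12 == 2 ** 132)   # True: 2048^12 combinations = 2^132
print(phrase_bits(12))                   # (132, 128, 4)  -> 128 entropy bits
print(phrase_bits(24))                   # (264, 256, 8)  -> 256 entropy bits
```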

Result

Now we understand that your application TkeySpace is a node of the blockchain that communicates with other nodes using p2p messages, stores block headers and validate information using the Merkle Tree, verifies transactions, filters information using the bloom filter, and operates completely in a decentralized model. The application code contains all the necessary blockchain settings for communicating with the network, the so-called chain parameters.
TkeySpace is a new generation mobile app. A completely new level of security, easy user-friendly interfaces and all the necessary features that are required to work with cryptocurrency.
submitted by tkeycoin to Tkeycoin_Official [link] [comments]

By the power of CTOR! Xthinner is now working with BCH mainnet blocks

A few hours ago, I fixed the last showstopping bug in my Xthinner code and got it running between two of my ABC full nodes on mainnet. One node serves as a bridge to the rest of the world, receiving Compact Blocks and transmitting Xthinner. The other is connected to no other nodes except this bridge.
The first block transmitted by Xthinner was #577,310. My nodes had just started when that block was published, so it was transmitted with only 24 transactions in mempool out of 2865 total in the block. It worked nonetheless. Xthinner has worked on every block since then, with no failures, and with no block taking more than 1.5 networking round trips. Most non-tiny blocks have gotten about 99.0% compression after fetching missing transactions, or about 99.3% before fetching. In comparison, Compact Blocks usually gets about 96-97% edit: 98.5% compression. Eight blocks have been complete on arrival without any missing transaction fetching (0.5 round trips), and 24 blocks have required a round trip to fetch missing transactions. Edit: This missing transaction rate is quite high, and probably the result of the chained-nodes test setup. Each hop in a node chain adds up to 5 seconds of delay in transaction propagation, and this setup has 2 chain hops. I expect performance to improve in more normal configurations.
I will probably make an alpha code release soon so that people can play around with it. The code still has some known bugs and vulnerabilities, though, so don't run it on anything you want to stay running. There's still a lot of work to be done before the code is of high enough quality to be merged into Bitcoin ABC, so don't get too excited.
Here's the best-performing block so far:
2019-04-08 09:27:53.076818 received: xtrblk (1660 bytes) peer=0
2019-04-08 09:27:53.077210 Filling xtrblk with mempool size 841
2019-04-08 09:27:53.077644 xtrblk: 841 tx, 1 prefilled
2019-04-08 09:27:53.077707 Received complete xthinner block: 000000000000000002f914b0c6afb568bec86b9a5166a5023f466c5ee7100e90.
2019-04-08 09:27:53.136257 UpdateTip: new best=000000000000000002f914b0c6afb568bec86b9a5166a5023f466c5ee7100e90 height=577332 version=0x20800000 log2_work=87.837579 tx=269896356 date='2019-04-08 09:27:30' progress=1.000000 cache=10.6MiB(79763txo) warning='40 of last 100 blocks have unexpected version'
This was a 841 tx, 363 kB block transmitted in 1660 bytes. That's 99.54% compression or 15.79 bits/tx. Uncoincidentally, this was also one of the largest blocks so far, with 23 minutes elapsed since the prior block.
Bigger blocks get better compression because the header, coinbase, and checksum specification overhead is a smaller proportion of the whole, and sometimes also because the Xthinner algorithm can more consistently omit the initial bytes of the TXID.
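Reproducing the arithmetic for that block (assuming the quoted 363 kB means roughly 363,000 bytes):

```python
# Compression ratio and per-transaction cost for the 841-tx block above.
block_bytes, xthinner_bytes, tx_count = 363_000, 1660, 841
compression = 1 - xthinner_bytes / block_bytes
bits_per_tx = xthinner_bytes * 8 / tx_count
print(f"{compression:.2%} compression, {bits_per_tx:.2f} bits/tx")
# -> roughly "99.54% compression, 15.79 bits/tx"
```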
Sizes of the xtrblk messages:
2019-04-08 06:17:48.394401 received: xtrblk (4511 bytes) peer=0
2019-04-08 06:34:40.219904 received: xtrblk (1249 bytes) peer=0
2019-04-08 06:50:25.290082 received: xtrblk (1209 bytes) peer=0
2019-04-08 06:51:49.082137 received: xtrblk (282 bytes) peer=0
2019-04-08 07:04:02.028427 received: xtrblk (416 bytes) peer=0
2019-04-08 07:09:44.603728 received: xtrblk (1235 bytes) peer=0
2019-04-08 07:15:32.338061 received: xtrblk (351 bytes) peer=0
2019-04-08 07:17:25.983502 received: xtrblk (839 bytes) peer=0
2019-04-08 07:19:38.947229 received: xtrblk (498 bytes) peer=0
2019-04-08 07:21:22.099113 received: xtrblk (404 bytes) peer=0
2019-04-08 07:37:20.573195 received: xtrblk (569 bytes) peer=0
2019-04-08 07:38:41.106193 received: xtrblk (1259 bytes) peer=0
2019-04-08 07:46:40.656947 received: xtrblk (764 bytes) peer=0
2019-04-08 07:52:40.203599 received: xtrblk (591 bytes) peer=0
2019-04-08 08:01:30.239679 received: xtrblk (776 bytes) peer=0
2019-04-08 08:26:06.212842 received: xtrblk (287 bytes) peer=0
2019-04-08 08:37:10.882075 received: xtrblk (2177 bytes) peer=0
2019-04-08 08:39:05.003971 received: xtrblk (392 bytes) peer=0
2019-04-08 08:40:27.191932 received: xtrblk (274 bytes) peer=0
2019-04-08 08:53:57.338920 received: xtrblk (1294 bytes) peer=0
2019-04-08 08:54:44.033299 received: xtrblk (344 bytes) peer=0
2019-04-08 09:04:55.541082 received: xtrblk (947 bytes) peer=0
2019-04-08 09:27:53.076818 received: xtrblk (1660 bytes) peer=0
2019-04-08 09:39:21.527632 received: xtrblk (878 bytes) peer=0
2019-04-08 09:48:57.831915 received: xtrblk (836 bytes) peer=0
2019-04-08 09:49:18.074036 received: xtrblk (243 bytes) peer=0
2019-04-08 09:52:09.949254 received: xtrblk (474 bytes) peer=0
2019-04-08 10:05:35.192227 received: xtrblk (451 bytes) peer=0
2019-04-08 10:12:37.671585 received: xtrblk (1317 bytes) peer=0
2019-04-08 10:12:40.761272 received: xtrblk (294 bytes) peer=0
2019-04-08 10:13:10.548404 received: xtrblk (278 bytes) peer=0
2019-04-08 10:17:06.108110 received: xtrblk (512 bytes) peer=0
Sizes of the fetched missing transactions:
2019-04-08 06:17:48.410703 received: xtrtxn (842930 bytes) peer=0
2019-04-08 06:34:40.221133 received: xtrtxn (5691 bytes) peer=0
2019-04-08 06:50:25.291309 received: xtrtxn (517 bytes) peer=0
2019-04-08 07:04:02.029652 received: xtrtxn (3461 bytes) peer=0
2019-04-08 07:09:44.604922 received: xtrtxn (744 bytes) peer=0
2019-04-08 07:15:32.339450 received: xtrtxn (1155 bytes) peer=0
2019-04-08 07:17:25.984684 received: xtrtxn (3337 bytes) peer=0
2019-04-08 07:19:38.948412 received: xtrtxn (654 bytes) peer=0
2019-04-08 07:21:22.100418 received: xtrtxn (3510 bytes) peer=0
2019-04-08 07:37:20.574477 received: xtrtxn (3990 bytes) peer=0
2019-04-08 07:38:41.107558 received: xtrtxn (519 bytes) peer=0
2019-04-08 07:52:40.204659 received: xtrtxn (2364 bytes) peer=0
2019-04-08 08:01:30.240842 received: xtrtxn (275 bytes) peer=0
2019-04-08 08:26:06.214200 received: xtrtxn (274 bytes) peer=0
2019-04-08 08:39:05.005097 received: xtrtxn (273 bytes) peer=0
2019-04-08 08:53:57.340233 received: xtrtxn (514 bytes) peer=0
2019-04-08 08:54:44.034397 received: xtrtxn (1243 bytes) peer=0
2019-04-08 09:04:55.542438 received: xtrtxn (420 bytes) peer=0
2019-04-08 09:39:21.528842 received: xtrtxn (811 bytes) peer=0
2019-04-08 09:49:18.075155 received: xtrtxn (274 bytes) peer=0
2019-04-08 09:52:09.950762 received: xtrtxn (10478 bytes) peer=0
2019-04-08 10:05:35.193791 received: xtrtxn (8248 bytes) peer=0
2019-04-08 10:12:40.762645 received: xtrtxn (1741 bytes) peer=0
As a reminder: Xthinner does not affect storage, RAM, or CPU requirements for full nodes in any way, and has very little effect on total network traffic, which is dominated by tx announcements and historical block uploads. Xthinner's compression only affects block propagation speed. Block propagation is the code path that is most sensitive to performance and latency for keeping Bitcoin decentralized while scaling, and has long been a sore point, so this optimization is worthwhile. But its effects are limited to that code path.
Edit 4/18/2019: I tracked down the cause of the high missing/colliding transaction rate and associated extra round trips to an off-by-one bug in my encoder. The code was checking how many bytes were needed to disambiguate from the 2nd-closest mempool match instead of the closest mempool match. Since fixing this bug a few hours ago, only 1 out of 27 block transmission attempts have required an extra round trip for tx fetching.
submitted by jtoomim to btc [link] [comments]

Support SegWit - Choose Your Preferred Miner

submitted by BuyBTCuk to Bitcoin [link] [comments]

80.241.217.46 mining 18 blocks today containing mostly 1 -> 64 -> 128 -> 256 -> 512 transactions

Who is 80.241.217.46? This IP is mainly producing blocks whose transaction counts follow a base-2 pattern... even blocks with only 1 transaction. It seems like a waste not to include more transactions, and it looks rather suspicious to me. Currently they got 3 of the past 4 blocks, so they seem strong. The server is located in Germany.
Is this the reason why I've been waiting almost an hour for confirmation of my 0.0001 fee transaction?
UPDATE: Only 192 transactions have been confirmed in over 1.5 hours because of this pool.
submitted by nostr to Bitcoin [link] [comments]

Groestlcoin 6th Anniversary Release

Introduction

Dear Groestlers, it goes without saying that 2020 has been a difficult time for millions of people worldwide. The Groestlcoin team would like to take this opportunity to wish our best to everyone coping with the direct and indirect effects of COVID-19. Let it bring out the best in us all and show that collectively, we can conquer anything.
The centralised banks and our national governments are facing unprecedented times with interest rates worldwide dropping to record lows in places. Rest assured that this can only strengthen the fundamentals of all decentralised cryptocurrencies and the vision that was seeded with Satoshi's Bitcoin whitepaper over 10 years ago. Despite everything that has been thrown at us this year, the show must go on and the team will still progress and advance to continue the momentum that we have developed over the past 6 years.
In addition to this, we'd like to remind you all that this is Groestlcoin's 6th Birthday release! In terms of price there have been some crazy highs and lows over the years (with highs of around $2.60 and lows of $0.000077!), but in terms of value– Groestlcoin just keeps getting more valuable! In these uncertain times, one thing remains clear – Groestlcoin will keep going and keep innovating regardless. On with what has been worked on and completed over the past few months.

UPDATED - Groestlcoin Core 2.18.2

This is a major release of Groestlcoin Core with many protocol level improvements and code optimizations, featuring the technical equivalent of Bitcoin v0.18.2 but with Groestlcoin-specific patches. On a general level, most of what is new is a new 'Groestlcoin-wallet' tool which is now distributed alongside Groestlcoin Core's other executables.
NOTE: The 'Account' API has been removed from this version which was typically used in some tip bots. Please ensure you check the release notes from 2.17.2 for details on replacing this functionality.

How to Upgrade?

Windows
If you are running an older version, shut it down. Wait until it has completely shut down (which might take a few minutes for older versions), then run the installer.
OSX
If you are running an older version, shut it down. Wait until it has completely shut down (which might take a few minutes for older versions), run the dmg and drag Groestlcoin Core to Applications.
Ubuntu
http://groestlcoin.org/forum/index.php?topic=441.0

Other Linux

http://groestlcoin.org/forum/index.php?topic=97.0

Download

Download the Windows Installer (64 bit) here
Download the Windows Installer (32 bit) here
Download the Windows binaries (64 bit) here
Download the Windows binaries (32 bit) here
Download the OSX Installer here
Download the OSX binaries here
Download the Linux binaries (64 bit) here
Download the Linux binaries (32 bit) here
Download the ARM Linux binaries (64 bit) here
Download the ARM Linux binaries (32 bit) here

Source

ALL NEW - Groestlcoin Moonshine iOS/Android Wallet

Built with React Native, Moonshine utilizes Electrum-GRS's JSON-RPC methods to interact with the Groestlcoin network.
GRS Moonshine's intended use is as a hot wallet. Meaning, your keys are only as safe as the device you install this wallet on. As with any hot wallet, please ensure that you keep only a small, responsible amount of Groestlcoin on it at any given time.

Features

Download

iOS
Android

Source

ALL NEW! – HODL GRS Android Wallet

HODL GRS connects directly to the Groestlcoin network using SPV mode and doesn't rely on servers that can be hacked or disabled.
HODL GRS utilizes AES hardware encryption, app sandboxing, and the latest security features to protect users from malware, browser security holes, and even physical theft. Private keys are stored only in the secure enclave of the user's phone, inaccessible to anyone other than the user.
Simplicity and ease-of-use is the core design principle of HODL GRS. A simple recovery phrase (which we call a Backup Recovery Key) is all that is needed to restore the user's wallet if they ever lose or replace their device. HODL GRS is deterministic, which means the user's balance and transaction history can be recovered just from the backup recovery key.

Features

Download

Main Release (Main Net)
Testnet Release

Source

ALL NEW! – GroestlcoinSeed Savior

Groestlcoin Seed Savior is a tool for recovering BIP39 seed phrases.
This tool is meant to help users recover a slightly incorrect Groestlcoin mnemonic phrase (also known as a backup or seed). You can enter an existing BIP39 mnemonic and get derived addresses in various formats.
To find out whether one of the suggested addresses is the right one, you can click on it to check that address's transaction history on a block explorer.
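To give a feel for how this kind of recovery works, here is a minimal Python sketch (not the tool's actual code) that validates a BIP39 checksum and brute-forces a single wrong word. It assumes the standard BIP39 English wordlist has been saved locally as english.txt, and any phrase it suggests still has to be confirmed against a block explorer, as described above.

    import hashlib

    # Assumes the standard BIP39 English wordlist is saved locally as english.txt
    # (one word per line, 2048 words).
    WORDS = open("english.txt").read().split()
    INDEX = {w: i for i, w in enumerate(WORDS)}

    def checksum_ok(words):
        """Return True if a 12/15/18/21/24-word phrase has a valid BIP39 checksum."""
        if any(w not in INDEX for w in words):
            return False
        bits = "".join(format(INDEX[w], "011b") for w in words)
        ent_len = len(bits) * 32 // 33
        entropy, checksum = bits[:ent_len], bits[ent_len:]
        ent_bytes = int(entropy, 2).to_bytes(ent_len // 8, "big")
        expected = format(hashlib.sha256(ent_bytes).digest()[0], "08b")[: len(checksum)]
        return checksum == expected

    def repair_one_word(words):
        """Try every wordlist word in every position; yield phrases whose checksum passes."""
        for pos in range(len(words)):
            for candidate in WORDS:
                trial = words[:pos] + [candidate] + words[pos + 1:]
                if checksum_ok(trial):
                    yield " ".join(trial)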

Features

Live Version (Not Recommended)

https://www.groestlcoin.org/recovery/

Download

https://github.com/Groestlcoin/mnemonic-recovery/archive/master.zip

Source

ALL NEW! – Vanity Search Vanity Address Generator

NOTE: Requires an NVIDIA GPU, or runs on CPU only. AMD graphics cards will not work with this address generator.
VanitySearch is a command-line Segwit-capable vanity Groestlcoin address generator. Add unique flair when you tell people to send Groestlcoin. Alternatively, VanitySearch can be used to generate random addresses offline.
If you're tired of the random, cryptic addresses generated by regular Groestlcoin clients, then VanitySearch is the right choice for you to create a more personalised address.
VanitySearch is a Groestlcoin address prefix finder. If you want to generate safe private keys, use the -s option to enter a passphrase which will be used to generate a base key, as in the BIP38 standard (VanitySearch.exe -s "My PassPhrase" FXPref). You can also use VanitySearch.exe -ps "My PassPhrase", which adds a cryptographically secure seed to your passphrase.
VanitySearch may not compute a good grid size for your GPU, so try different values with the -g option to get the best performance. If you want to use GPUs and CPUs together, you may get the best performance by keeping one CPU core free to handle GPU/CPU exchanges (use the -t option to set the number of CPU threads).
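As a rough sense of why GPU acceleration and grid tuning matter, the expected number of candidate keys grows exponentially with the length of the requested prefix. The following back-of-the-envelope Python calculation treats each base58 character after the fixed leading 'F' as uniform, which is only approximately true, so read it as an order-of-magnitude estimate rather than anything exact.

    # Rough expected attempts for a case-sensitive base58 prefix of length n
    # after the fixed leading 'F'; each extra character multiplies the work by 58.
    def expected_attempts(n: int) -> int:
        return 58 ** n

    for n in range(1, 7):
        print(f"{n} extra characters: ~{expected_attempts(n):,} keys")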

Features

Usage

https://github.com/Groestlcoin/VanitySearch#usage

Download

Source

ALL NEW! – Groestlcoin EasyVanity 2020

Groestlcoin EasyVanity 2020 is a Windows app built from the ground up that makes it easier than ever to create your very own bespoke bech32 address(es) while not connected to the internet.
If you're tired of the random, cryptic bech32 addresses generated by regular Groestlcoin clients, then Groestlcoin EasyVanity 2020 is the right choice for you to create a more personalised bech32 address. This 2020 version uses the new VanitySearch to generate not only legacy addresses (F prefix) but also Bech32 addresses (grs1 prefix).
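For those curious what a "grs1" address actually is under the hood, here is a minimal Python sketch of standard BIP173 bech32 encoding for a version-0 witness program with the "grs" human-readable part. It assumes Groestlcoin's bech32 addresses follow the standard scheme, and the 20-byte hash at the end is a dummy value for illustration only.

    CHARSET = "qpzry9x8gf2tvdw0s3jn54khce6mua7l"

    def bech32_polymod(values):
        gen = [0x3B6A57B2, 0x26508E6D, 0x1EA119FA, 0x3D4233DD, 0x2A1462B3]
        chk = 1
        for v in values:
            top = chk >> 25
            chk = ((chk & 0x1FFFFFF) << 5) ^ v
            for i in range(5):
                chk ^= gen[i] if ((top >> i) & 1) else 0
        return chk

    def bech32_hrp_expand(hrp):
        return [ord(c) >> 5 for c in hrp] + [0] + [ord(c) & 31 for c in hrp]

    def bech32_create_checksum(hrp, data):
        polymod = bech32_polymod(bech32_hrp_expand(hrp) + data + [0] * 6) ^ 1
        return [(polymod >> 5 * (5 - i)) & 31 for i in range(6)]

    def bech32_encode(hrp, data):
        combined = data + bech32_create_checksum(hrp, data)
        return hrp + "1" + "".join(CHARSET[d] for d in combined)

    def convertbits(data, frombits, tobits):
        """Regroup a byte sequence into 5-bit groups, padding the final group."""
        acc, bits, out = 0, 0, []
        for b in data:
            acc = (acc << frombits) | b
            bits += frombits
            while bits >= tobits:
                bits -= tobits
                out.append((acc >> bits) & ((1 << tobits) - 1))
        if bits:
            out.append((acc << (tobits - bits)) & ((1 << tobits) - 1))
        return out

    def p2wpkh_address(pubkey_hash20: bytes) -> str:
        """Witness version 0 + 20-byte pubkey hash, human-readable part 'grs'."""
        return bech32_encode("grs", [0] + convertbits(list(pubkey_hash20), 8, 5))

    print(p2wpkh_address(bytes(20)))   # dummy all-zero hash, for illustration only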

Features

Download

Source

Remastered! – Groestlcoin WPF Desktop Wallet (v2.19.0.18)

Groestlcoin WPF is an alternative full-node client with an optional lightweight 'thin-client' mode, based on WPF. Windows Presentation Foundation (WPF) is one of Microsoft's more recent approaches to a GUI framework, used with the .NET framework. Its main advantages over the original Groestlcoin client include support for exporting blockchain.dat and a lite wallet mode.
This wallet was previously deprecated but has been brought back to life with modern standards.

Features

Remastered Improvements

Download

Source

ALL NEW! – BIP39 Key Tool

Groestlcoin BIP39 Key Tool is a graphical tool for generating Groestlcoin public and private keys. It is a standalone tool which can be used offline.

Features

Download

Windows
Linux :
    pip3 install -r requirements.txt
    python3 bip39_gui.py

Source

ALL NEW! – Electrum Personal Server

Groestlcoin Electrum Personal Server aims to make using Electrum Groestlcoin wallet more secure and more private. It makes it easy to connect your Electrum-GRS wallet to your own full node.
It is an implementation of the Electrum-grs server protocol which fulfils the specific need of using the Electrum-grs wallet backed by a full node, but without the heavyweight server backend, for a single user. It allows the user to benefit from all Groestlcoin Core's resource-saving features like pruning, blocks only and disabled txindex. All Electrum-GRS's feature-richness like hardware wallet integration, multi-signature wallets, offline signing, seed recovery phrases, coin control and so on can still be used, but connected only to the user's own full node.
Full node wallets are important in Groestlcoin because they are a big part of what makes the system trustless. People no longer have to trust a financial institution like a bank or PayPal; they can run software on their own computers. If Groestlcoin is digital gold, then a full node wallet is your own personal goldsmith who checks for you that received payments are genuine.
Full node wallets are also important for privacy. Using Electrum-GRS under default configuration requires it to send (hashes of) all your Groestlcoin addresses to some server. That server can then easily spy on your transactions. Full node wallets like Groestlcoin Electrum Personal Server would download the entire blockchain and scan it for the user's own addresses, and therefore don't reveal to anyone else which Groestlcoin addresses they are interested in.
Groestlcoin Electrum Personal Server can also broadcast transactions through Tor, which improves privacy by resisting the traffic analysis that could otherwise link the user's IP address to a broadcast transaction. If enabled, this happens transparently whenever the user clicks "Send" on a transaction in the Electrum-grs wallet.
Note: Currently Groestlcoin Electrum Personal Server can only accept one connection at a time.
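To make the privacy point above concrete, this is roughly what an Electrum-style client computes and sends to a server for each legacy address it watches: the SHA256 of the address's scriptPubKey, byte-reversed and hex-encoded. The Python sketch below is illustrative only; it deliberately skips checksum verification (Groestlcoin's base58 checksum differs from Bitcoin's) and the commented-out address is a placeholder, not a real one.

    import hashlib

    B58 = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

    def b58decode(s: str) -> bytes:
        n = 0
        for c in s:
            n = n * 58 + B58.index(c)
        raw = n.to_bytes((n.bit_length() + 7) // 8, "big")
        pad = len(s) - len(s.lstrip("1"))      # leading '1' characters encode zero bytes
        return b"\x00" * pad + raw

    def electrum_scripthash(p2pkh_address: str) -> str:
        """The identifier an Electrum-style client sends to a server for a legacy
        address: SHA256 of the address's scriptPubKey, byte-reversed, hex-encoded."""
        payload = b58decode(p2pkh_address)     # version byte + hash160 + 4-byte checksum
        hash160 = payload[1:-4]                # checksum deliberately not verified here
        script = bytes.fromhex("76a914") + hash160 + bytes.fromhex("88ac")   # P2PKH script
        return hashlib.sha256(script).digest()[::-1].hex()

    # print(electrum_scripthash("F..."))       # substitute one of your own legacy addresses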

Features

Download

Windows
Linux / OSX (Instructions)

Source

UPDATED – Android Wallet 7.38.1 - Main Net + Test Net

The app allows you to send and receive Groestlcoin on your device using QR codes and URI links.
When using this app, please back up your wallet and email the backup to yourself! This saves your wallet in a password-protected file, so your coins can be retrieved even if you lose your phone.

Changes

Download

Main Net
Main Net (FDroid)
Test Net

Source

UPDATED – Groestlcoin Sentinel 3.5.06 (Android)

Groestlcoin Sentinel is a great solution for anyone who wants the convenience and utility of a hot wallet for receiving payments directly into their cold storage (or hardware wallets).
Sentinel accepts XPUBs, YPUBs, ZPUBs and individual Groestlcoin addresses. Once added, you will be able to view balances, view transactions, and (in the case of XPUBs, YPUBs and ZPUBs) deterministically generate addresses for that wallet.
Groestlcoin Sentinel is a fork of Groestlcoin Samourai Wallet with all spending and transaction building code removed.

Changes

Download

Source

UPDATED – P2Pool Test Net

Changes

Download

Pre-Hosted Testnet P2Pool is available via http://testp2pool.groestlcoin.org:21330/static/

Source

submitted by Yokomoko_Saleen to groestlcoin


Here is an example Bits field from a block header: 0x180696f4. Bits is just a shorthand version of the Target. It looks alien at first, but it is basically split into two parts: the exponent, which gives you the size of the target in bytes, and the coefficient, which gives you the initial 3 bytes of the target.

In Bitcoin the service string is encoded in the block header data structure, and includes a version field, the hash of the previous block, the root hash of the merkle tree of all transactions in the block, the current time, and the difficulty. Bitcoin also stores an extraNonce field as part of the coinbase transaction, which is stored as the left-most leaf node in the merkle tree.

One of the most important computations a node performs is the block header hash. Using a real block (125552) as an example, the hash is computed over the six header fields, serialized in little-endian byte order. In Python, the byte packing looks like this:

    import sys
    from struct import pack, unpack, unpack_from

    sys.byteorder        # check the endianness of the processor, e.g. 'little'
    pack('<I', 1)        # packing: convert the integer 1 to little-endian bytes

The target threshold is a 256-bit unsigned integer which a header hash must be equal to or below in order for that header to be a valid part of the block chain. However, the header field nBits provides only 32 bits of space, so the target number uses a less precise format called "compact", which works like a base-256 version of scientific notation.

When a miner is trying to mine a block, the Unix time at which the block header is being hashed is noted within the block header itself. Bits is a shortened version of the Target, and the Nonce is the field that miners change in order to try to get a hash of the block header (a Block Hash) that is below the Target. The serialized block header data structure is:

    Field                  Size       Data
    Version                4 bytes    Little-endian
    Previous Block Hash    32 bytes   Internal byte order
    Merkle Root            32 bytes   Internal byte order
    Time                   4 bytes    Little-endian
    Bits                   4 bytes    Little-endian
    Nonce                  4 bytes    Little-endian
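As a small worked example of the "compact" format described above, here is a Python sketch that expands the Bits value 0x180696f4 into the full 256-bit target a header hash must not exceed. This is the standard decoding (coefficient times 256 raised to the exponent minus 3) and is not tied to any particular client.

    def bits_to_target(bits: int) -> int:
        """Expand the compact 'Bits' field: the high byte is the exponent (target
        size in bytes) and the low three bytes are the coefficient."""
        exponent = bits >> 24
        coefficient = bits & 0xFFFFFF
        return coefficient * 256 ** (exponent - 3)

    target = bits_to_target(0x180696F4)
    print(f"{target:064x}")   # a valid header hash must be less than or equal to this value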
