HydraDX is a cross-chain liquidity protocol built on Substrate and running as a parachain in the Polkadot network, founded by Mattia Gagliardi and Jakub Greguš. Basilisk is its sister network on Kusama.
AMA
In the old financial world, liquidity is fragmented among many gatekeepers who protect their business interests, profit from the upside, and socialize the downside. The cost of liquidity is also high because legacy technology consists of siloed databases.
Blockchain is changing this unfortunate reality by providing an open and universally accessible technology layer for transaction settlement and asset exchange.
However, the dominant blockchain application platform – Ethereum – suffers from being a general platform for many varied use cases – the network is getting clogged and fees are high.
In order to achieve the optimal properties, HydraDX is built as a parachain – a specialized blockchain in the Polkadot network. It benefits from shared security and from the speed and flexibility of the Substrate framework while remaining optimized for a single purpose: enabling fluid programmable value exchange.
Thanks to planned interoperability between the Polkadot and Ethereum ecosystems, Hydra will tap into Ethereum's liquidity, talent and community, merging the best of both worlds.
1) https://hydradx.substack.com/p/hydradx-omnipool-part-1
Earlier, the HydraDX team published an article on their blog introducing the new LHDX token. This caused a lot of confusion in the community, and many people asked the team for more details: how will this token be used, and will it decrease or increase the value of the original HDX token? The team decided to hold an AMA session on Discord as soon as possible; below is a detailed translation of that session.
Jakub Gregus — I will try to describe as simply as possible our vision behind the LHDX token and why we had to make these changes, as well as the main problems and concerns associated with the previous design of the HDX token.
Imagine liquidity-mining whales who regularly deploy their capital across various liquidity-incentive programs. Let's imagine these players want power over the entire HydraDX protocol, whether short term or long term, the same way they can gain power over Sushiswap or other protocols through liquidity mining. In our case it can be even worse. The security of Proof-of-Stake protocols is usually high because if someone wants to buy up a POS token, whether Polkadot, Kusama, Solana or any other POS blockchain, slippage and fees make each additional purchase harder as the price starts to rise sharply the more tokens are bought on the market.
With the Omnipool design, however, there is no such price increase. A whale can put a lot of capital into the Omnipool, which immediately increases the overall size of the Omnipool. Say they have 15,000 BTC and deposit it: the Omnipool matches the liquidity they have deposited and the pool becomes huge, which means slippage is practically zero even on very large trades. Even if they then start buying millions of HDX, the price still barely moves, which is also bad for HDX holders who want to see the value of the HDX token grow over time. It is bad for the security of the entire protocol too, because such an attack would be much cheaper than an attack on any other Proof-of-Stake system: there would be no natural brake on buying HDX. The main problem is that they could gain control over the protocol. They don't even need to provide a lot of liquidity themselves, because the Omnipool is backed by many different assets. If they are smart and patient, it would be impossible to distinguish these attempts from normal liquidity mining, and they could carry out the attack easily. That's why we thought about it more deeply when we saw the current Curve wars and Magic Internet Money being used to drain Anchor's yield reserves.
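To make the slippage point concrete, here is a minimal sketch (not the actual Omnipool math) that approximates the HDX side of the pool as a constant-product pool against a hub asset. The pool depths and trade size are made-up numbers; the only point is that the same purchase moves the price less and less as total liquidity grows.

```python
# Minimal sketch, not the real Omnipool formulas: approximate the HDX leg as a
# constant-product (xyk) pool against a hub asset, to show how the price impact
# of a fixed-size buy shrinks as pool depth grows.

def buy_price_impact(hdx_reserve: float, hub_reserve: float, hub_in: float) -> float:
    """Relative price move caused by spending `hub_in` hub tokens on HDX."""
    price_before = hub_reserve / hdx_reserve
    new_hdx_reserve = hdx_reserve * hub_reserve / (hub_reserve + hub_in)
    price_after = (hub_reserve + hub_in) / new_hdx_reserve
    return price_after / price_before - 1.0

trade = 1_000_000  # the same size buy in every scenario (hypothetical units)
for depth in (10e6, 100e6, 1e9, 10e9):  # hub-side depth of the pool
    impact = buy_price_impact(hdx_reserve=depth, hub_reserve=depth, hub_in=trade)
    print(f"pool depth {depth:>14,.0f}: price impact {impact:.4%}")
```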
Protocol manipulation attacks can have a very low barrier to carry out, and that scares us a lot, because we take the overall security of the platform as seriously as possible. It is also one of the reasons for delays in front-end development on Basilisk, since roughly 80% of that code will also be used in HydraDX. We are rethinking things even more seriously than before. We realized, "oh shit, this can be very dangerous," and after that we had long internal discussions about it. Before publishing the post introducing the second token we expected it to raise a lot of questions, but no one expected FUD of such force, with claims that we are stealing liquidity from the protocol and leaving for the islands. That was ugly and not even close to the truth. That's the story, and I hope everyone understands why we are going with this protocol design and why it is actually bullish, even for HDX: the HDX token will have a very strong position in the protocol. So we can move on to questions.
lolmcshizz - Governance in Substrate is not the same as governance in an Ethereum protocol. Almost everything can be changed, no matter who implements it.
Jakub Gregus - For example, the dYdX governance token or the OpenSea governance token: they don't have much power over the protocols. The dYdX team has been pretty clear about how dYdX is managed and what voting with dYdX can do, and they were quick to point out that you won't be able to vote in a share of the fees earned by the dYdX protocol. On Substrate, by contrast, you could even choose to move the network from Polkadot to Cosmos if that is what the token holders desire: someone prepares the necessary code, the token holders vote for it, and it's not difficult at all if you really want to do it. The runtime upgrade is a very powerful and useful tool and should be taken seriously, because it can change absolutely everything.
lolmcshizz - A question a council member asked us. What is the use and the potential value of HDX if we move to a two-token model? What will drive the value of HDX?
What is the advantage of these two tokens? What exactly will be possible to do with HDX?
Jakub Gregus - Yes, exactly. It will be much harder for HDX to appreciate if the protocol is backed by billions in assets, which is what we have been aiming for since day one: it becomes almost impossible to push the price higher, because slippage would be near zero even when trading millions of HDX at once.
Colin - Whatever the pool token is, and in our case it is LHDX, the supply of this central liquidity token can change very drastically. Our main concern was that if the protocol succeeds and attracts a lot of liquidity while HDX is the central liquidity token, the HDX supply would grow enormously, diluting everyone who holds HDX. Millions of newly issued HDX diluting the total supply would be a big hurdle. The second token, LHDX, lets us move that supply volatility onto it, which gives us a stronger token economy. I would say HDX and LHDX are in something like a zero-sum game, and with the right design it can be made positive. We need to think about the right combination of features for these two tokens and what system will work well. Some applications of the HDX token in protocol governance: deciding which tokens are added to the protocol, or where exactly the fees are distributed. If the system works well, the value of that governance should certainly increase.
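A toy illustration of the dilution concern, with made-up numbers. The 1:1 minting rule below is an assumption for the sketch, not the real Omnipool rule; it only shows how fast an existing holder's share shrinks if the central token has to be issued against incoming liquidity.

```python
# Toy illustration only: assume one hub token is minted per $1 of liquidity
# deposited (a simplification, not the real Omnipool minting rule) and watch an
# existing holder's share of supply shrink as deposits grow.

initial_supply = 10_000_000   # hub tokens already in circulation
holder_balance = 100_000      # one holder's balance (starts at a 1% share)

for deposits_usd in (10e6, 100e6, 1e9, 5e9):
    minted = deposits_usd                     # assumed 1:1 minting against deposits
    share = holder_balance / (initial_supply + minted)
    print(f"${deposits_usd:>14,.0f} deposited -> holder share {share:.4%}")
```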
Jakub Gregus - Exactly. For the whole history of this market I have been against governance tokens; I really hate them. In my opinion it is unfair to the community to hand out tokens that are supposed to govern the protocol but in reality have no power to change anything. We see such cases with the OpenSea or dYdX tokens. For HydraDX this is a completely different scenario: the community can really change everything and choose where to extract value and where not. The community should also decide, and research in every possible way, how best to capture value from all of HydraDX's infrastructure and liquidity.
Q - Was the decision to redesign the token made entirely in light of recent developments in the liquidity wars? It sounds like what is happening now, where people are trying hard to take control of governance tokens because those tokens actually decide where the liquidity goes. Is this to make sure that no group of whales can accumulate a lot of governance tokens and play games with the protocol?
Jakub Gregus - In fact, yes, for us it was one of the alarm bells.
Question - Do the community's arguments about the LHDX token have any ground: "Oh no, a second token appeared out of nowhere and the team was silent all this time, they probably want to deceive us"? To me this looks like an obvious solution to keep the system secure in the long run.
Jakub Gregus - Yes, we thought about it from the very beginning, but the Curve Wars and other governance token exploits were the final wake-up call for us. We decided, "We can't let this take its course and hope that this story does not affect us." Huge money will be at stake, and there will be people who will try to cheat the system.
Jakub Panik - Perhaps I can add something. We thought about these problems from the very beginning and we had some candidate solutions, for example introducing fees or slightly changing the Omnipool model, but each of them introduces serious problems into the protocol design. If we change the design of the Omnipool, computing transactions on the network becomes too complex and trading becomes less efficient. The second option would interfere with the oracles, because it would create a huge spread on HDX. We also thought about limiting how much HDX could be bought from the pool, but that wasn't an ideal solution either. In the end the two-token solution turned out to be the best, because it puts restrictions on the HDX supply that the previous design did not have.
Jakub Gregus - We spent hundreds of hours working on it with BlockScience and eventually arrived at this mechanic. They have a very deep understanding of governance tokens; BlockScience actually simulated an attack on Gitcoin.
Question - I have a question about adding new tokens. Do you have an idea of the conditions for adding new tokens? Will we focus on big tokens like DOT and ETH at the very beginning? Will smaller tokens also get a chance, and how often do you plan to add new tokens? Roughly one token per week, ten tokens per week?
Jakub Gregus - Yes, I've been thinking about it a lot lately. We can focus on major tokens and blue chips that are included in indexes like DPI: MakerDAO, Compound, Aave and so on. These are tokens that pass security checks; they have teams doing internal security audits that dive very deep into all the possible risks associated with these tokens, so all of them are fairly safe. Then all the L1 tokens that can be brought over a bridge, and most of the legitimate projects built on them. I can also imagine that some legitimate teams with new projects can be listed faster. For example, if a team at the level of Acala or Interlay appears in the ecosystem, we see that they do an excellent job in all directions, and they have security audits, there is little reason for us to wait a year or more before listing them. Obviously, we will also have certain volume and liquidity requirements.
Jakub Panik - We can also use Basilisk to assess whether assets have enough volume and are safe enough for us. In the future we are thinking about mechanisms that would let us include even less secure assets in the Omnipool, but capped by liquidity, so they would not pose a big risk to the other assets.
Jakub Gregus - The key will be to combine multiple oracles. Chainlink has been field tested, but not everyone agrees on relying on it, so we can combine several other oracles and sources.
Question - The whole DeFi market has become quite crowded over the last year: various DEXs, various networks. What is your strategy for attracting sustainable liquidity to HydraDX? The Omnipool looks like a pretty tasty target to hack; how do you rate the risk of the Omnipool's liquidity being drained, and what are your mitigation measures?
Jakub Panik - The first thing is simply not to accept assets that are not safe, but we are also thinking about a few measures:
1 - Fees based on volume: if there is huge selling pressure on one token, we raise the fee sharply, in the limit towards infinity, so that traders cannot drain all the liquidity in a short time (see the sketch after this list).
2 - We are thinking about an emergency stop button. Triggering it would go to a popular vote: if something really bad happens, we should have an emergency measure in place to stop trading if enough people decide to do so. I don't know if other projects have this feature.
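A minimal sketch of the two measures above. The fee curve, the thresholds and the governance flag are hypothetical parameters for illustration, not HydraDX's actual design: the fee on an asset escalates with recent net outflow, and a flag controlled by a vote can halt trading entirely.

```python
# Hedged sketch of the two measures above; all names and parameters are
# hypothetical, not HydraDX's actual fee curve or emergency mechanism.

BASE_FEE = 0.003        # 0.3% baseline trading fee
ESCALATION = 4.0        # how aggressively the fee grows with outflow pressure

trading_halted = False  # would be flipped by a governance ("emergency stop") vote

def dynamic_fee(net_outflow: float, pool_liquidity: float) -> float:
    """Fee rises steeply as recent net outflow approaches the pool's liquidity."""
    pressure = max(0.0, min(net_outflow / pool_liquidity, 0.999))
    return BASE_FEE + ESCALATION * pressure ** 2 / (1.0 - pressure)

def quote_fee(net_outflow: float, pool_liquidity: float) -> float:
    if trading_halted:
        raise RuntimeError("trading halted by governance vote")
    return dynamic_fee(net_outflow, pool_liquidity)

for outflow in (0, 100_000, 400_000, 800_000):
    fee = quote_fee(outflow, pool_liquidity=1_000_000)
    print(f"net outflow {outflow:>8,}: fee {fee:.2%}")
```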
Jakub Gregus - It's called an "automatic fuse" (a circuit breaker). I also have a vision for how to make the custody of assets more secure. I have my eye on the Entropy project, one of the coolest Substrate projects, which could make trading much more secure: assets would be stored elsewhere, perhaps on Statemine or Statemint, while DOTsama DeFi would only use rights to them, with a withdrawal period as a buffer in case of any hack, miscalculation or bug in the code, not only in HydraDX or Basilisk but also in bridges, XCM and other network components. And then, obviously, the most standard measures, such as security audits. We are trying to attract companies that specialize in this, but they are overloaded with work across various ecosystems, mainly Ethereum.
The first question was about the strategy that will help us succeed. In the short term it will be liquidity mining initiatives, subsidized not only by us but also by other projects in the Dotsama ecosystem offering their own incentives, so there will be incentives driving a large influx of users into the ecosystem. There is also some talk of incentivizing the initial bootstrap of ecosystem-wide liquidity through the Kusama and Polkadot treasuries; in fact, the Polkadot Treasury is the best candidate for this. The Polkadot ecosystem is far behind other ecosystems, so we need to put in as much effort as possible on every front, and large incentive programs are being used successfully by most of the alternative ecosystems, so we cannot avoid them either.
In the medium to long term, bonds somewhat like Olympus bonds can be a great solution, but ours will be different and closer to traditional bonds. Olympus bonds mature in five days; we will instead run a kind of auction that rewards the longest possible maturities. Some people may decide to lock their liquidity in HydraDX even for years; I would probably do the same. That lets us offer the highest APY to the longest commitments, and the bonds can trade on a secondary market. You could use these bonds as collateral on Angular or other markets. Bonds are much more effective at retaining liquidity than liquidity mining programs, most of which are very wasteful. That is another observation from the liquidity wars: billions of dollars get locked in pools that generate only a few thousand dollars of fees per day, or whose volume is far smaller than their size, so the fees are tiny and there is no need to incentivize pools to be that bloated.
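As a rough, purely hypothetical illustration of maturity-weighted bond incentives (the rates, the linear bonus curve and the maximum lock are invented for the example, not a published HydraDX design):

```python
# Hypothetical illustration of bonds that reward longer maturities; every number
# and the weighting curve are invented for the example.

BASE_APY = 0.05      # APY for the shortest lock
MAX_BONUS = 0.20     # extra APY earned at the maximum maturity
MAX_YEARS = 4.0

def bond_apy(lock_years: float) -> float:
    """Longer locks earn a larger share of the bonus, favouring long maturities."""
    weight = min(lock_years, MAX_YEARS) / MAX_YEARS
    return BASE_APY + MAX_BONUS * weight

for years in (0.25, 1, 2, 4):
    print(f"lock {years:>4} years -> APY {bond_apy(years):.1%}")
```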
But what you really need is to incentivize usage, so we will introduce incentives for trading volume. That makes much more sense, and there is nothing unique or revolutionary about it: every CEX, and not only crypto exchanges but stock exchanges and every other exchange in the world, has done this throughout the history of trading, incentivizing market makers and other parties with fee discounts or even negative fees, because exchanges actually make money on volume. If you have liquidity pools holding billions in assets that just sit there and nobody trades against them, that is unprofitable and even wasteful.
And it is not just about incentivizing the AMM; it is also about incentivizing integrations that connect to HydraDX and Basilisk, letting users pay fees in the asset they want on the destination protocol. For example, users of some wallet need to call a smart contract on Astar or Ethereum or somewhere else, but they do not want to pay the fee in a scarcer asset; they want a predictable fee. HydraDX and Basilisk can provide this service on the back end, so users can pay fees in stablecoins whenever they want, from any wallet that integrates our SDK. This is one of the first and most obvious use cases that we need to finish and prepare on our side. AMMs are perfect for these use cases and improve the overall experience in crypto, so it is a clear incentive for other application developers to connect with us. The second thing is moving our design forward and improving it over time, implementing some of the other features we have in development. I have even compiled a short list of features, already implemented by other teams in other protocols, that we will offer from the early days of HydraDX. We can't wait to put everything together to maximize the capital efficiency of DeFi. We will release an updated roadmap with detailed bullet points.
Jakub Gregus - We have had communication problems for almost our entire history, because until now we were a very small team of six people managing the entire project, testing and integrating all the new products from Polkadot, Substrate and Parity, and doing all this insanely complex research. Fortunately, we have now filled the missing positions in our organization, gaps we only recently recognized, to our regret, because we were too deeply immersed in research and development. For most of November and December we were looking for people to fill those holes, not only developers but also people in communication, organization and operations in general, people who can bring a more professional approach to everything, because we are planning to launch very soon.
All communication with the community should be handled more carefully, especially around major changes. The recent blog post failure highlighted how important it is to understand how an announcement will look to community members who are less involved in the project and do not know all the nuances. Every future major change will be announced gradually; at the very least we will pick a few people from the community, gather feedback, and then move to more open development. Almost all of our code is open source and we discuss everything publicly with the community, but in terms of the development process we were not really ready for that. We tried to do it from the very beginning, but it was not very effective, because it requires much more organization and much more carefully planned and analyzed tasks, and we did not have enough people on the team for that. We have since resolved this. There is no one-size-fits-all solution to the problem, but we are working on it, starting with publishing this year's strategy and an updated roadmap. That will be a good starting point, and everything will be public and community-first.
Now we have several people responsible for community development. Even aggregating all of our research and specs was a real challenge for new people, which we have finally solved too. The reason there were big gaps between us and you in the past is that we were moving too fast: we tried to be first on Rococo and tried to make the first cross-chain call and transfer. I do not want to make excuses, but there was too much of everything, and we constantly lacked not only developers but also community-facing people. We have hundreds of channels, and I don't mean with the community, but with all the other projects: projects that want to integrate with us, collaborate with us, join research, join liquidity mining, and so on. We never intended to run away and hide from everyone; there was simply too much at once. Things are finally getting better: we now have people responsible for specific areas like back-end, front-end, community, security, research, project management, hiring, and so on.
Jakub Panik - Let me talk about the current state of development. We had to take a step back because we needed to hire more people. Seeing how everything started to crumble when people visited the crowdloan interface, we realized that we need to build strong infrastructure and cooperate with other projects in this direction. We took a step back and started working with Subsquid on data processing, to give users correct data so the apps can be fast and responsive. It took a lot more effort than we expected, but we decided to do it because it is the long-term work that will help us in the future and prevent further problems. Problems like that would still have appeared at the very beginning, even if to a lesser extent: we knew that if five thousand people came to our application at one moment, it would simply collapse and no one could use it. So we decided to take a step back, rebuild everything from scratch and create the right infrastructure, and about 90% of it is reusable for HydraDX. We are not doing this just for Basilisk; it is reusable, fundamental work, and it will significantly shorten the HydraDX launch.
I don't want to give dates or anything like that; we've made that mistake before. It won't take as long as some people might think. Don't get us wrong, we've been involved in startups before, but most startups don't have this kind of communication with a community: you just talk to your investors, and they are always fine with it as long as you're doing well. For us this is something new, and we will get better at it going forward.
Jakub Gregus - For HydraDX, Colin created the final Omnipool spec, which has been double-checked and validated by BlockScience, and the developers have started working on its implementation. We realized that the HydraDX implementation is much simpler than Basilisk, as Jakub already mentioned, and much of the middleware built for Basilisk will carry over.
Colin - As far as Omnipool is concerned, we are at the stage where the basic structure is already in place. We are modeling various scenarios, such as how high fees should be and what makes HydraDX attractive in terms of liquidity. The basic structure is already defined and ready to be implemented.
Question - When you talk about adding assets to the Omnipool, I would like to clarify whether this will be done by the technical committee, by HydraDX governance, or, for example, through referendums decided by HDX holders?
Second question - You said that fees will be charged for using the protocol. How will these fees be distributed to HDX token holders? Could you elaborate on the distribution mechanics: will it be a share paid in HDX, or will the fees be distributed to holders as dividends or something similar? Can you tell us more about this?
Colin - We don't yet have a final picture of what these fees will look like when the network launches. I gave a list of examples of what could be included in protocol governance, but the bottom line is that even if we start without fee distribution, HDX tokens will be used to govern the protocol. From the Substrate perspective a lot can be changed through runtime upgrades, so HDX holders will be able to introduce or raise a fee and direct protocol fees to HDX holders. They have the ability to manage the tokenomics and the protocol, but it won't all happen on day one.
Question - What would the procedure or mechanics of the fees be? For example, governance decides it wants to increase the fees for using the protocol and distribute them among HDX holders. After the runtime upgrade is performed, what would the mechanics look like: providing liquidity, or staking?
Colin - We have yet to work it out. We are looking at something like a Curve distribution.
Jakub Panik - Fees will be collected in some treasury, or perhaps a secondary treasury, and once we have that treasury we can open access to it so that people can receive part of these funds.
Jakub Gregus - Fee distribution is a very delicate matter, because you don't want to turn your token into a security and provoke the regulators. Regulators can be very sensitive about this: a pure redistribution of fees is a clear digital analogue of a security, so the regulators could fine us or shut our activities down. In crypto you can get away with a buyback, which is a kind of indirect redistribution, or with a burn model. There is also the Curve escrow model, which we see becoming the gold standard in the industry; many protocols using these mechanics have reviewed them with their lawyers and everything looks fine.
You need to do some work, and that work materializes through the right to vote. This is the essence of the Curve model, which for us is the best candidate for implementation. The entire community is already familiar with this model, so at least this time we will not shock or confuse people.
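For readers who don't know it, here is a minimal sketch of the vote-escrow ("ve") idea popularized by Curve, which is named above as the leading candidate. The four-year maximum lock and linear decay mirror Curve's veCRV; nothing here is a confirmed HydraDX design.

```python
# Minimal sketch of Curve-style vote escrow (veCRV-like), mentioned above as a
# candidate model; parameters mirror Curve's, not a confirmed HydraDX design.

MAX_LOCK_YEARS = 4.0

def voting_power(locked_amount: float, remaining_lock_years: float) -> float:
    """Voting power scales with amount and remaining lock time, decaying to zero."""
    remaining = max(0.0, min(remaining_lock_years, MAX_LOCK_YEARS))
    return locked_amount * remaining / MAX_LOCK_YEARS

# 1,000 tokens locked for the maximum 4 years start at 1,000 votes and decay linearly.
for years_left in (4.0, 2.0, 1.0, 0.0):
    print(f"{years_left} years left -> {voting_power(1_000, years_left):,.0f} votes")
```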
Jakub Panik - Regarding the technical committee: we would like to create a technical council partly because of the regulators. We need a committee that can help us add new tokens, or perhaps we will find a mechanism that reduces all the risks of adding new tokens to the Omnipool. This must be resolved first.
Jakub Gregus - Some teams provide services like this to Compound: they measure the liquidity properties of assets and then, based on the results and models, set collateral ratios, interest rates, and so on. I would also like to create an economic council, not just a technical one. The economic council would handle questions about token economics: what the volume ratios should be, how large the fees should be, what share of the fees should be taken, how liquidity should be used. Maybe HydraDX governance could redirect some liquidity into DOT, participate in crowdloans, and use the crowdloan rewards as liquidity for HydraDX. We would handle these and other economic questions with that council, while the technical committee would be responsible for the technical safety of these assets. I hope these committees will be made up of the best people in the industry, and I will gladly vote for a very good salary for them.
Question - When we move to mainnet, the fear of slashing will become real. It's not that scary on the testnet, but once the mainnet goes live it will be. I hear a lot of comments on Discord about some validators not getting nominators. Will that lead to slashing? Will it be a problem? Will it make people afraid to nominate and reduce the amount of tokens staked? Because we need more validators to strengthen the network. If a validator goes offline, will it be slashed?
Jakub Panik - Regarding the last part of the question: you can go offline to update your node and validator. That is not penalized; on the contrary, it is encouraged. As for the first part, we probably won't have staking as it exists now, because the shared security of Polkadot and Kusama means we don't need to maintain our own validator set. We thought about keeping staking on the mainnet, but we don't need it at the very beginning. We only need collators, and we don't need as many people running them as are doing so now. The big question is what to do with the people who are validating on the testnet right now, and we have a solution, though it is probably too early to talk about it: we want to decentralize the infrastructure, not only the internal infrastructure but the node infrastructure too. We may have incentive schemes for node operators supporting our infrastructure, but it's still too early to say. We'll refine these ideas as the mainnet launch approaches.
Jakub Gregus - In this case the penalties will not be as harsh, or there will be none at all. In the worst case a node becomes unavailable, and the user automatically switches to another node. Listening to you, I can see there is still a lot of confusion in the Polkadot and Substrate community about the roles of validators and collators, because parachain validators are Polkadot validators. As a parachain you don't need your own validators; you need collators. It can be 1 node, 10 nodes or 100 nodes, it doesn't matter, and collator nodes cannot push invalid state transitions, so their security properties are not as critical as validator security. Collators and full nodes are service providers rather than POS validators.
Substrate also offers a super powerful feature called off-chain workers, which can do a lot of useful work in parallel or off-chain, but those nodes would still need to put some tokens at stake to guarantee that the data they add to the chain is valid. Such nodes would have staking enabled and could, for example, match transactions off-chain, so not everything has to happen on the blockchain; it could use something like zero-knowledge proofs to save a lot of space, and so on.
Jakub Panik - It could be data providers: middleware nodes that supply data could stake some share of HDX to back the claim "my data is correct", and be slashed if it turns out to be wrong.
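A tiny sketch of that stake-and-slash idea; the bond size, slash fraction, provider name and verification step are all hypothetical placeholders, and a real mechanism would of course live on-chain.

```python
# Hypothetical sketch of staked data providers: a provider bonds HDX behind its
# reports and loses part of the bond when a report is proven wrong.

SLASH_FRACTION = 0.10   # share of the bond slashed per bad report (made-up)

class DataProvider:
    def __init__(self, name: str, bonded_hdx: float):
        self.name = name
        self.bonded_hdx = bonded_hdx

    def report(self, value: float, verified_value: float) -> None:
        """Submit a value; slash the bond if it disagrees with the verified one."""
        if value != verified_value:
            penalty = self.bonded_hdx * SLASH_FRACTION
            self.bonded_hdx -= penalty
            print(f"{self.name}: bad report, slashed {penalty:,.0f} HDX")
        else:
            print(f"{self.name}: report accepted")

provider = DataProvider("middleware-node-1", bonded_hdx=50_000)
provider.report(value=101.2, verified_value=101.2)  # honest report
provider.report(value=98.0, verified_value=101.2)   # wrong report, gets slashed
```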
Question - My question is about your journey and how you feel about the ecosystem as a whole. What is your view of Substrate after working with it for a year? Has it become more or less powerful, in your opinion? And how do you see collaboration with other teams? I know you mentioned kUSD as a starting point in the Omnipool. What incentive does Acala have to support HydraDX when they have their own swap functionality? How do you see your place in the ecosystem, and how is the team's morale?
Jakub Gregus - That's a great question. In early 2020, when we started developing HydraDX on Substrate, we had many more concerns about the ecosystem: there were not many parachains, at least not visible or planned. That is no longer a concern. On the other hand, I personally do not see a large flow of new parachains coming, to be honest; that's my opinion. At the same time, the interesting Substrate networks are very specialized and have quite unexpected use cases. I'm more optimistic about Substrate than I've ever been, especially after talking with many developers from other ecosystems over the past year and seeing how many ideas and parts of Substrate get re-implemented in other networks. Obviously the Cosmos SDK is more widely adopted because it's much older and field-proven and didn't go through stressful changes in 2019 or 2020, so a lot of projects moved there; it's a more stable development option. But then you see Polygon using Substrate for its data availability network, Octopus Network using Substrate to connect application-specific chains to NEAR, or Compound choosing Substrate.
I also see that Cosmos developers have a lot of respect for Substrate, and some of them have even chosen Substrate for their latest projects, and Ethereum developers building an ETH 2.0 client are actually moving to Substrate and are very happy with it, in particular with off-chain workers and other Substrate features, especially runtime upgrades.
A few weeks ago I was talking to the guy who organized and carried out the first hard fork for Cosmos. We agreed on how difficult that is to pull off on any network or system, while any Substrate network, be it Polkadot or Kusama or anything else, upgrades what would elsewhere be a hard fork just like that, even every 2-3 weeks; it is simply unthinkable compared with other teams and projects. I see this as one of the underrated features. There was a mantra that a blockchain should be as immutable as possible, but I don't think that's true. Even Bitcoin in 2018 or 2019 had very critical inflation bugs, and you wouldn't expect that old, simpler codebase to have flaws that critical; they had to be fixed as soon as possible. After all, you still need people to maintain the software, maintain the network, and provide power and electricity.
Every network is still just a tool for people to coordinate and get things done, so you can't just disappear and say, "Yeah, it's the same software." No, it needs to be maintained, and that's why I think runtime upgrades are a very elegant way to evolve into something better over time. It's the coolest, killer feature Substrate has to offer.
In terms of the ecosystem, meeting many founders, and especially the developers, makes me calmer and more confident, because I realized that many of them are super talented people moving to Substrate and Polkadot from other ecosystems such as Algorand and Cosmos, and they are genuinely great people, not only in knowledge and experience but as individuals. Obviously there are also people who are not so great, whose intention is not to build something great but to make quick money, but such people are everywhere. I'm much more confident in and satisfied with the ecosystem than before. On the other hand, the ecosystem has many drawbacks, because it is one of the most difficult ecosystems to develop for. Avalanche and everything that uses the EVM works great: you can fork anything that lives on Ethereum, it will work fine, and users just use Metamask.
Using all the infrastructure and tools already built lets any developer move super fast and deploy projects in weeks or months, while Substrate and Polkadot don't have much tried and tested code that you can fork, build on top of, and then improve. Another thing worth mentioning is tooling: Substrate and Polkadot were changing too fast, which also caused a lot of delays on our end, because many of the tools and pieces of infrastructure were outdated and didn't work very well. It was very hard in 2019, 2020 and 2021, but that is just the usual pain of any new ecosystem. Solana in 2020 and last year was very similar: it was very difficult to develop and run anything on it, there was no code to fork, there was no tooling, everything had to be created from scratch. This is also a great opportunity for anyone who can see these problems and, instead of complaining, build solutions and ask the treasury for a grant to implement them.
The treasuries of Polkadot and Kusama have never been as full as they are now. I think this will finally push a lot of people to work in the ecosystem. So the ecosystem as a whole has good things and bad things, as every ecosystem does. I've talked to many founders from other ecosystems and everyone complains about something, as always. The whole crypto industry is still at a very early stage; even Ethereum is still early in its development. There are so many things that are missing or not done well. I think we should see this as a great opportunity, not a hindrance, and support the people who are building the ecosystems.
We are at the stage where we need to build the ecosystem, and we are pushing hard on that. It is a stage I didn't expect back in 2019 or 2020, when there was much more doubt, but now that parachains and XCM have launched and all these things work, the FUD from competitors no longer matters. The ecosystem is working, and we can finally move into the phase of Polkadot and Substrate where most of the code has already been written and implemented and developers can focus on optimizing and improving it.
I'm more optimistic than ever, but the projects need to work together more like a family rather than each trying to build everything on its own; otherwise there will be no ecosystem, and if that is what they want, they can go and build their own. It would be great to see better collaboration between the teams, but so far it hasn't happened much because everyone was too busy developing their own projects. Finally, more teams are getting to the integration stage: Interlay is doing a lot of integrations, Moonriver and Moonbeam are launching XCM, and Acala, Astar and a few other top teams are integrating with other projects. We are approaching the stage of an ecosystem, not just a collection of individual networks.
Jakub Panik - I have had a complicated relationship with Substrate: first love, then hate, then love again. Right from the start, Substrate allowed us to roll out a blockchain very quickly, which is unprecedented. We had tried other approaches and worked with more developers before, but this was the fastest solution we had tried. After that we had to build something custom, and that was very difficult because there were a lot of updates and they slowed us down a lot. It was a growing pain, made worse by the pressure to launch Polkadot and Kusama: Parity and the other developers had to solve problems quickly and launch the network quickly. I understood that, and a few times when we had problems I dived deep into the Substrate code and thought, "Oh shit, there are millions of lines of code in here," but it doesn't feel like bloated code; it's really good, concise code with research and documentation behind it. I think I saw a blog post where Gavin Wood wrote that Substrate has 2 million lines of code. Having a framework like that is a huge help: it's really powerful, you can build on top of it and change things as you see fit. Deep dives take a long time because things keep changing, but right now, talking to Robert Habermeier and the people who build Substrate, they are basically done with the core functionality. It will not change much, so if you want to start building something, there is no better time. We learned a lot because we started early, and newcomers won't face the problems we did. I want to encourage all the founders who were put off by the large number of changes that slowed everyone down: that is no longer a problem. Most things are done and ready; if you're thinking about building something, you should do it right now. It's really powerful, and I don't know a better option for building a custom blockchain at the moment. We constantly look at other solutions, and everything has its pros and cons, but this is a really good ecosystem with professional developers and support.
Q - You talked about the Web3 Foundation grants you received. How are things going with the Web3 Foundation? Do they, or for example Gavin Wood, provide support for HydraDX? Does the whole ecosystem anticipate the Omnipool and see value in it? What do you think?
Jakub Gregus - I don't know how aware Gavin Wood is of us; probably somewhat. We don't talk much with the Web3 Foundation, more with Parity. The Web3 Foundation funds and organizes the research, but the research is ultimately implemented by Parity. Parity is in a sense the more important player in the ecosystem, and it is better for projects to be close to them. The grant program is useful for newcomers who see an opportunity for tooling, or if you want to develop something specific that might be useful to others and it will slightly lower your development costs, but you can't rely on grants alone for something important like an ETH bridge or a BTC bridge. We had a validator monitoring grant in 2019 but it wasn't renewed, for some odd reason; it really annoyed us at the time and then pushed us forward, and we decided to build something that doesn't depend too much on grants. If you want to create something that is missing in the ecosystem, asking for a grant is one of the best ways to do it, and you can also get guidance from them on how to implement it. Are you talking about the Subaction grant?
Question - I would like to know whether you cooperate with Parity. Do they expect a product from you and see its value for the ecosystem?
Jakub Gregus - These organizations need to be very careful about what they communicate externally; they need to remain neutral. They don't want to hype or promote specific projects, so as not to give the community or retail investors the impression that this is the chosen, correct project where they should invest their money. Some projects in the Polkadot ecosystem and other ecosystems exploit this kind of hype or marketing, "Oh, we got a grant from the Web3 Foundation, they support us," which is very dubious.
Question - I mean this is not a regular DEX or AMM; it is supposed to be a breakthrough technology. Does the other side appreciate your ideas?
Jakub Gregus - I personally think they don't fully appreciate how revolutionary or innovative the project is, because most of the people who work at the Web3 Foundation or Parity are not DeFi experts. They are experts in cryptography, Rust, back-end development, networking and other areas, so when they see the Omnipool or something like it, they can't easily tell how it is better than Sushiswap or another XYK model.
Jakub Panik - We work with these people at Parity and we love them not only as developers but as people in general. They are very good guys. I think they have some confidence in our project and what we are doing.
Jakub Gregus - When we ask them to review code for us, they review it and tell us they didn't find any major bugs, only minor cosmetic issues. They tell their colleagues about it and spread a good word about us, because they see the infrastructure solutions we build alongside the ones they are also trying to develop: they know how difficult it is even with far more resources, and we did it with less. They can only help with what they are familiar with; they helped us when we launched on the Rococo testnet and when we made the first XCM transfer.
Jakub Panik - This is not the best question for us, because Jakub and I hate bragging. It is better to ask Parity.
Jakub Gregus - Unfortunately, in my opinion, they don't think much about how it is better or how it differs from other projects; they simply don't have time to think about it. There is a lot of pressure on everyone at Parity and the Web3 Foundation to get Polkadot and Kusama fully up and running and successful, because everyone expects it to be one of the most successful ecosystems, given that Gavin Wood co-created Ethereum. They have a lot of work.
Q - Do you want to implement fractional, algorithmic pricing for LHDX?
Jakub Gregus - That's an impressive and interesting question. When we started working with BlockScience in December last year, one of their first questions was: "Do you want to make LHDX a stablecoin? It would be significant and would simplify a lot of things." We thought it was an impressive idea, but we don't want to make a stablecoin because it has too many nuances. While thinking about it, we considered that maybe not a stablecoin but an asset like Reflexer's RAI would be very interesting. Also, a problem that arises with USD stablecoins and other fiat currencies is that inflation is becoming a very serious issue, so I'm not sure a plain stablecoin would be attractive; maybe something like Anchor-linked assets. If you look at the FEI protocol, it's quite similar to HydraDX, but they don't have their own DEX; Terra is also similar in some ways, but having the native token as collateral is very dangerous. In May we saw Terra almost enter a death spiral, and for that reason they are thinking about adding BTC as another asset backing their pool. These ideas are relevant and very interesting: they would allow the creation of a highly scalable stablecoin, and it could see very organic demand since it would help stabilize the pool, but right now it is very futuristic. It's possible, but it's not our priority. If anyone wants to work on it, we will be happy to support and maybe even fund it, but it's not on our agenda for today. Let's launch HydraDX and Basilisk first; we have a lot of ideas and features that need to be rolled out on the HydraDX and Basilisk networks.
...
...
Jakub Gregus - I just want to look back at the past year and thank you all for your exceptional patience. We didn't communicate well and often left you in the dark. That was not intentional; no one was hiding, no one was spending your money. Thank you all, and we look forward to your support as we finally launch our mainnet!
HydaDX is a multi-headed liquidity omnipool.
AMA SESSION
In the old financial world, liquidity is fragmented among many gatekeepers, protecting their business interests, profiting from the upside, while socializing the downside. The cost of liquidity is also high because legacy technology consists of siloed databases.
Blockchain is changing this unfortunate reality by providing an open and universally accessible technology layer for transaction settlement and asset exchange.
However, the dominant blockchain application platform – Ethereum – suffers from being a general platform for many varied use cases – the network is getting clogged and fees are high.
In order to achieve the optimal properties, HydraDX is built as a parachain – specialized blockchain in the Polkadot network. It is benefiting from shared security, speed and flexibility of the Substrate framework while remaining optimized for a single purpose: enabling fluid programmable value exchange.
Thanks to planned interoperability between Polkadot and Ethereum ecosystems, Hydra will tap into Ethereum’s liquidity, talent and community, merging the best of the both worlds.
1)https://hydradx.substack.com/p/hydradx-omnipool-part-1
Earlier, the HydraDX team published an article on their blog in which they introduced the new LHDX token. This caused a lot of confusion among the community and many people asked the team to provide more details on how this token will be used, will it decrease the value of the original HDX token or increase it? The team decided to hold an AMA session on Discord as soon as possible, we have made a detailed translation of this AMA session for you.
Jakub Gregus — I will try to describe as simply as possible our vision behind the LHDX token and why we had to make these changes, as well as the main problems and concerns associated with the previous design of the HDX token.
Imagine that some liquidity mining whales who often use their capital in various liquidity stimulation programs. Let's imagine that these guys want to have power over the entire HydraDX protocol, whether it's short term or long term, so they can get power in Sushiswap or other protocols through liquidity mining. In our case, it can be even worse, because the security of Proof-Of-Stake protocols is usually at a high level, because if someone wants to buy any POS tokens, it does not matter Polkadot or Kusama, Solana or any other POS blockchain. As more tokens are bought on the market, slippage and fees make any additional purchases more difficult because the price starts to rise a lot.
In fact, due to the design of the Omnipool, there will be no increase in price and some whales may invest a lot of capital in Omnipool, which will immediately increase the overall size of the Omnipool. They have 15,000 BTC and are putting it into the Omnipool. During this time, Omnipool will match the amount of liquidity they have deposited and the pool will be huge, which means that slippage will be practically zero, even on any large trade. Even if they start buying millions of HDX, the price will still not move, which is also bad for HDX holders who want to see some increase in the value of the HDX token over time. It's also bad from a security point of view of the entire protocol, because someone could perform an attack that would be much cheaper than an attack on any other Proof-Of-Stake system, because there would be no restrictions on the purchase of HDX tokens. The main problem is that they can gain control over the protocol. They don't even need to provide a lot of liquidity to the pool because Omnipool can be backed by a lot of different assets. If they are smart and patient, it will be impossible to identify these attempts and distinguish them from normal liquidity mining and they can easily carry out this attack. That's why we thought more deeply about it when we saw the current Curve wars, Magic Internet Money is being used to devastate Anchor's yield resources.
Protocol manipulation attacks can have a very low threshold to carry out, this scares us a lot as we take the overall security of the platform as seriously as possible, which also leads to delays in front-end development on Basilisk, since most of this code, approximately 80% will also be used in HydraDX. We are rethinking things even more seriously than before. We realized that “oh shit, this can be very dangerous” and after that we had long internal discussions about this. Before posting about the introduction of the second token, we expected that this could cause a lot of questions, but no one expected that this would cause a FUD of such force, supposedly we are stealing liquidity from the protocol and leaving for the islands. It was clearly ugly and not even close to the truth. That's the story, and I hope everyone understands why we're going to use this protocol design and why it's actually bullish, even for HDX. Thus, the HDX token will have a very strong position in the protocol. So we can move on to questions.
lolmschizz - Governance in Substrate is not the same as governance in any other Ethereum protocol. Almost everything can be changed, it does not matter who will implement it.
Jakub Gregus - For example, dYdX governance token or OpenSea governance token. They don't have much power over the protocols. The dYdX team has been pretty clear about the process of managing dYdX and voting with dYdX, and they were quick to point out that you won't be able to enable sharing of commissions earned by the dYdX protocol. On the other hand, on Substrate, you can choose to switch the network connection from Polkadot to Cosmos, whatever the token holders desire. Someone will prepare the necessary code for this, and the token holders vote for it, and it's not difficult at all if you really want to do it. On the other hand, the runtime update is a very powerful and useful tool and should be taken seriously because it can change absolutely everything.
lolmcshizz - A question a council member asked us. What is the use and the potential value of HDX if we move to a two-token model? What criteria will be used to increase the value of HDX?
What is the advantage of these two tokens? What exactly will be possible to do with HDX?
Jakub Gregus - Yes, exactly. It will be much more difficult to achieve the appreciation and value of HDX tokens if the protocol is backed by billions of assets, which is what we have been aiming for since day one, it will be almost impossible to push the price higher because slippage will be zero even if we trade millions of HDX simultaneously.
Colin - Whatever the pool token, it can be anything, in our case it is LHDX, whatever this central liquidity token is, its supply can be changed very drastically. Therefore, our main concern was that if the protocol succeeds and a lot of liquidity inflows, and HDX is the central liquidity token, each HDX token and its supply will increase greatly, which will dilute the supply of tokens from those participants who hold HDX. Thus, millions of HDX tokens are issued and dilute the total supply - this can be a big hurdle. At the same time, the second LHDX token allows us to take on the volatility of the supply, which gives us a stronger token economy. I would say that HDX and LHDX are tokens that are in a zero-sum game, and with the right design, it can be positive. We need to think about the right combination and features for these two tokens and what system will work well. Here are some applications of the HDX token in protocol governance: if the system works well, the ability to decide which token will be added to this protocol or where exactly the fees will be distributed. The value of such management should certainly increase.
Jakub Gregus - Exactly, the whole history of the market, I was against governance tokens, I really hate them. In my opinion, it's really unfair to the community to give some tokens that are meant to govern the protocol, but in reality, they don't have the power to change anything. We see such cases with OpenSea or dYdX tokens. For HydraDX, this is a completely different scenario, the community can really change everything and choose where to extract value and where not. This should also be decided by the community and researched in every possible way on how best to capitalize on all of HDX's infrastructure and liquidity.
Q - Was the decision to redesign the token entirely in light of recent developments in the liquidity war? Does this sound like what is happening now where people are really trying to take control of the governance tokens because they actually decide where the liquidity goes? Is this to make sure that no group of whales can get hold of a lot of control tokens and have fun protocol games?
Jakub Gregus - In fact, yes, for us it was one of the bells.
Question - Do all these arguments of the community members about the LHDX token have ground under them - “Oh no, the second token appeared out of nowhere and the team was silent all this time, they probably want to deceive us”? As for me, this is an obvious solution to keep the system secure in the long run.
Jakub Gregus - Yes, we thought about it from the very beginning, but Curve Wars and other control token hacks were the last wake-up call for us. We decided for ourselves, “We can’t let this take its course and hope that this story does not affect us.” Huge money will be at stake, and there will be people who will try to cheat the system.
Jakub Panik - Perhaps I could add something from myself. We thought about these problems from the very beginning and we had some solutions to solve them. For example, the introduction of fees or a slight change in the Omnipool model, but each decision introduces serious problems in the design of the protocol. For example, if we change the design of Omnipool, then computing transactions on the network becomes too complex. This will become less efficient for trading. The second solution would interfere with the oracles, because it would create a huge spread on HDX. So we thought about limiting the purchase of HDX from the pool, but that wasn't the ideal solution and we felt it was the only good solution. In the end, the two-token solution turned out to be the best, because there were restrictions on the HDX supply that the previous design did not have.
Jakub Gregus - We spent hundreds of hours creating it with Blockscience and eventually came up with this mechanic. They have a very deep understanding of governance tokens. Blockscience actually simulated an attack on Gitcoin.
Question - I have a question about adding new tokens. Do you have an idea about the conditions for adding new tokens? Are we going to focus on big tokens at the very beginning like DOT, ETH? Will we also give smaller tokens a chance and how often do you plan to add new tokens? Rech Is it about one token per week, ten tokens per week?
Jakub Gregus — Yes, I've been thinking about it a lot lately. We can focus on major tokens and blue chips that are included in some indexes like DPI, MakerDAO, Compound, Aave. All tokens that pass security checks. They have internal security audit teams that dive very deep into all the possible risks associated with these tokens. All of them are fairly secure, then all L1 tokens that could be transferred to the bridge, and most of the legitimate projects from them. I can imagine that some legitimate teams with new projects can be listed faster. For example, a team at the level of Acala or Interlay appears in the ecosystem and we will see that they do an excellent job in all directions. They have security audits and there is little reason for us to wait a year or more for them to be listed. Obviously, we will also have certain volume and liquidity requirements.
Jakub Panik - We can also use Basilisk to assess whether assets have enough volume and whether they are safe enough for us. In the future, we are thinking about mechanisms that would let us include even less secure assets in Omnipool, but capped by liquidity, so they will not pose a big risk to the other assets.
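For illustration only, here is a minimal sketch of what such a listing check with a liquidity cap could look like; the thresholds, fields and cap formula are assumptions, not the actual HydraDX listing policy.

```rust
/// Hypothetical sketch of listing checks with a liquidity cap for riskier assets
/// (thresholds and fields are assumed, not the actual HydraDX listing policy).

struct AssetInfo {
    audited: bool,
    avg_daily_volume_usd: u128,
    liquidity_usd: u128,
}

enum Listing {
    Rejected,
    /// Listed, but deposits into Omnipool are capped at this USD value.
    CappedAt(u128),
    Unrestricted,
}

fn evaluate(asset: &AssetInfo) -> Listing {
    if asset.avg_daily_volume_usd < 100_000 || asset.liquidity_usd < 1_000_000 {
        return Listing::Rejected;
    }
    if asset.audited {
        Listing::Unrestricted
    } else {
        // Less battle-tested assets are allowed but capped so they cannot endanger the pool.
        Listing::CappedAt(asset.liquidity_usd / 10)
    }
}

fn main() {
    let new_token = AssetInfo { audited: false, avg_daily_volume_usd: 500_000, liquidity_usd: 5_000_000 };
    match evaluate(&new_token) {
        Listing::Rejected => println!("rejected"),
        Listing::CappedAt(cap) => println!("listed with a {cap} USD deposit cap"),
        Listing::Unrestricted => println!("listed without a cap"),
    }
}
```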
Jakub Gregus - The key will be to combine multiple oracles. Chainlink has been field-tested, but not everyone agrees with relying on it alone, so we can combine several other oracles and sources.
Question - The whole DeFi market has become quite crowded over the last year: various DEXs, various networks. What is your marketing strategy for attracting sustainable liquidity to HydraDX? Omnipool looks like a pretty tempting target to hack - how do you rate the risk of Omnipool's liquidity being drained, and what are your mitigation measures?
Jakub Panik - The first measure is simply not to accept assets that are not safe, but we are also thinking about a few others:
1 - Fees based on volume: if there is huge pressure on one token, we increase the fee towards infinity so that traders cannot drain all the liquidity in a short time.
2 - We're also thinking about an emergency stop button. If something really bad happens, this would be put to a popular vote; we should have an emergency measure in place to stop trading if enough people decide to do so. I don't know if other projects have this feature. (A rough sketch of both measures follows this list.)
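As a minimal sketch of the two measures above (the constants, the quadratic escalation curve and the halt threshold are assumptions, not the actual HydraDX pallet code): the fee grows with the share of an asset's liquidity leaving the pool within a window, and trading halts above a hard threshold until governance lifts it.

```rust
/// Hypothetical sketch, not HydraDX's actual fee logic.
/// Fee rises with the share of liquidity drained in the current window,
/// and the "automatic fuse" halts trading above a hard cap.

const BASE_FEE_BPS: u64 = 30;          // 0.30% baseline (assumed)
const MAX_FEE_BPS: u64 = 10_000;       // 100%: withdrawal effectively blocked
const HALT_OUTFLOW_PERMILL: u64 = 400; // halt if >40% of liquidity leaves in one window (assumed)

enum TradeStatus {
    Allowed { fee_bps: u64 },
    Halted,
}

/// `outflow` and `liquidity` are amounts of the same asset over the current window.
fn dynamic_fee(outflow: u128, liquidity: u128) -> TradeStatus {
    if liquidity == 0 {
        return TradeStatus::Halted;
    }
    let outflow_permill = (outflow.saturating_mul(1_000) / liquidity) as u64;
    if outflow_permill >= HALT_OUTFLOW_PERMILL {
        // Emergency stop: would only be lifted again by a governance vote.
        return TradeStatus::Halted;
    }
    // Quadratic escalation: small outflows pay roughly the base fee,
    // large outflows hit the 100% cap well before the halt level.
    let escalation = outflow_permill * outflow_permill / 10;
    let fee_bps = (BASE_FEE_BPS + escalation).min(MAX_FEE_BPS);
    TradeStatus::Allowed { fee_bps }
}

fn main() {
    for (outflow, liquidity) in [(1_000u128, 1_000_000u128), (100_000, 1_000_000), (450_000, 1_000_000)] {
        match dynamic_fee(outflow, liquidity) {
            TradeStatus::Allowed { fee_bps } => println!("outflow {outflow}: fee {fee_bps} bps"),
            TradeStatus::Halted => println!("outflow {outflow}: trading halted"),
        }
    }
}
```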
Jakub Gregus - It's called an "automatic fuse". I also have a vision for how to make decentralized storage of assets more secure. I have the Entropy project in my field of vision - one of the coolest Substrate projects - which would make trading much more secure, because all assets would be stored elsewhere, perhaps on Statemine or Statemint, while DOTsama DeFi would only use the rights to them, with a withdrawal period acting as a buffer in case of any hack, miscalculation or error in the code - not only in HydraDX or Basilisk, but also in bridges, XCM and other network components. And then, obviously, the most standard measures, such as security audits. We are trying to attract companies that specialize in this, but they have a lot of work in various ecosystems, mainly in Ethereum.
The first question was about the strategy that will help us succeed. In the short term, it will be liquidity mining initiatives subsidized not only by us but also by other projects in the Dotsama ecosystem offering their own incentives, so there will be incentives for a large influx of users into the ecosystem. There is also some talk of incentivizing the initial launch of ecosystem-wide liquidity through the Kusama and Polkadot treasuries; in fact, the Polkadot Treasury is the best candidate for this. The Polkadot ecosystem is far behind other ecosystems, so we need to put in as much effort as possible on every front, and large incentive programs are being used successfully by most of the alternative ecosystems. We have no way around this.
In the medium and long term, bonds can be a great solution. They are a bit like Olympus bonds, but ours will be different and closer to traditional bonds. Olympus bonds have a maturity of about 5 days; we will instead hold a kind of auction that encourages bonds with the longest possible maturity. There may be people who decide to lock their liquidity in HydraDX even for years - I would probably do the same - and that allows us to offer them the highest APY we have, with a secondary market for the bonds. You could use these bonds as collateral on Angular or other markets. Bonds are much more effective at maintaining liquidity than liquidity mining initiatives, most of which are very wasteful. This is another observation from the liquidity wars: billions of dollars are locked in some pools while those pools generate only a few thousand dollars of fees per day, or their volume is much smaller, so the fees are tiny - you don't need to incentivize pools to be that bloated.
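As a rough illustration of how maturity-weighted bonds could work (the discount schedule and numbers are assumptions, not the HydraDX bond design): the longer the lock, the larger the discount, so the implied APY favors long maturities.

```rust
/// Hypothetical sketch of maturity-weighted bonds (assumed numbers, not the HydraDX design).
/// A bond sells tokens at a discount; the longer the liquidity is locked,
/// the larger the discount, so the implied APY rewards long maturities the most.

fn bond_discount_pct(maturity_days: u32) -> f64 {
    // Assumed superlinear schedule so long locks earn disproportionately more, capped at 30%.
    let d = maturity_days as f64;
    (0.01 * d + 0.00002 * d * d).min(30.0)
}

/// Annualized yield implied by buying at a discount and waiting out the maturity.
fn implied_apy_pct(maturity_days: u32) -> f64 {
    let d = bond_discount_pct(maturity_days) / 100.0;
    let gain = 1.0 / (1.0 - d); // value received per unit paid
    (gain.powf(365.0 / maturity_days as f64) - 1.0) * 100.0
}

fn main() {
    for days in [5u32, 90, 365, 730] {
        println!(
            "{days:>4} days: discount {:.2}%, implied APY {:.1}%",
            bond_discount_pct(days),
            implied_apy_pct(days)
        );
    }
}
```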
But what you really need is to incentivize usage, so we will introduce incentives for trading volume. That makes a lot more sense, and there is nothing unique or revolutionary about it: every CEX - not just crypto exchanges but also stock exchanges and any other exchange in the world - has used this throughout the history of trading, incentivizing market makers and other parties with fee discounts or even negative fees, because exchanges really make money on volume. If you have liquidity pools holding billions in assets that just sit there and nobody uses them for trading or exchange, that is unprofitable and even wasteful.
And it's not just about AMM incentivization; it's also about incentivizing integrations that connect to HydraDX and Basilisk by allowing users to pay fees for the destination protocol in the asset they prefer. For example, users of some wallet need to call a smart contract on Astar or Ethereum or somewhere else, but they do not want to pay the fee in a scarcer asset and want a predictable fee, so HydraDX and Basilisk can provide this service on the back end. Users will be able to pay fees in stablecoins whenever they want, using any wallet that integrates our SDK. This is one of the first and most obvious use cases that we will have to finish and prepare on our end. We have seen that AMMs are perfect for these use cases and improve the experience in the cryptosphere, so this is a clear incentive for other application developers to connect with us. The second thing is moving our design forward and improving it over time, implementing some of the other features that we have in development. I've even compiled a short list of features, already implemented in other protocols by other teams, that we'll be offering from the early days of HydraDX. We can't wait to put everything together to maximize the efficiency of DeFi capital. We will release an updated roadmap with detailed bullet points.
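A minimal sketch of the fee-routing idea described above (the names, decimal convention and service margin are assumptions, not the actual SDK): the router quotes how much stablecoin a user must pay so the protocol can deliver the required amount of the destination chain's fee asset.

```rust
/// Hypothetical sketch of paying a destination-chain fee in a stablecoin.
/// The router quotes how much stablecoin the user must pay so the protocol can
/// swap it for `fee_amount` of the destination fee asset (names and margin are assumed).

struct Quote {
    stable_in: u128,     // stablecoin the user pays
    fee_asset_out: u128, // destination fee asset the protocol delivers
}

/// `price_num / price_denom` is the pool's spot price of the fee asset in stablecoin.
fn quote_fee_in_stable(fee_amount: u128, price_num: u128, price_denom: u128) -> Quote {
    let raw = fee_amount.saturating_mul(price_num) / price_denom;
    // Small assumed service margin (0.5%) to cover slippage and the router's cost.
    let stable_in = raw + raw / 200;
    Quote { stable_in, fee_asset_out: fee_amount }
}

fn main() {
    // Example: a transaction needs 0.02 of a destination asset priced at 8.4 stablecoin.
    // Amounts use 6 decimals here purely for illustration.
    let q = quote_fee_in_stable(20_000, 8_400_000, 1_000_000);
    println!("pay {} stable units to cover {} fee-asset units", q.stable_in, q.fee_asset_out);
}
```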
Jakub Gregus - We had communication problems for almost our entire history because, until now, we were a very small team of six people who managed the entire project and tested and integrated all the new products from Polkadot, Substrate and Parity. We have been doing all this insanely complex research, and only recently did we realize, to our regret, how many positions were missing in our organization - fortunately, we have now filled them. We were too deeply immersed in research and development. For most of November and December we were looking for people who could fill the holes in the organization - not only developer positions, but also communication, organization, operations and, in general, people who could bring a more professional approach to everything, because we are planning to launch the project very soon.
All communication with the community should be more careful and more considerate about every change. The recent blog post failure highlighted the importance of understanding how things look to community members who are less involved in the project and don't catch all the nuances. Every other major change will be announced gradually; at a minimum we will pick a few people from the community, get their feedback, and move towards more open development. Almost all of our code is open source and we discuss everything publicly with the community, but in terms of the development process we were not really ready for this. We tried to do it from the very beginning, but it was not very effective, because you need much more organization, much more carefully planned tasks and much better analysis of those tasks. There were not enough people on the team for that, which we have now resolved. There is no one-size-fits-all solution to this problem, but we are working on it, starting with the publication of this year's strategy and an updated roadmap. This will be a great starting point, and everything will be public and community-first.
Now we have several people who are responsible for community development. Even aggregating all of our research and specs was a real challenge for new people, which we have finally solved too. The reason there were big gaps between us and you in the past is that we were moving too fast: we tried to be first on Rococo and tried to make the first cross-chain call and transfer. I do not want to make excuses, but there was too much of everything for us, and we constantly lacked not only developers but also community-facing roles. We have hundreds of channels - I don't mean with the community, but with all the other projects: projects that want to integrate with us, collaborate with us, work with us, join research, join liquidity mining, and so on. We had no intention of running away and hiding from everyone; there was simply too much at once, and things are finally getting better. We now have dedicated people responsible for specific areas like back end, front end, community, security, research, project management, hiring, and so on.
Jakub Panik - Let me talk about the current state of development. We had to take a step back because we needed to hire more people. Seeing how everything started to crumble when people visited the crowdloan interface, we realized that we need to build strong infrastructure and cooperate with other projects in this direction. So we took a step back and started working with Subsquid on data processing, to give users correct data so the apps can be fast and responsive. It took a lot more effort than we thought, but we decided to do it because it is long-term work that prevents further problems. These kinds of problems would still have happened at launch, only on a much larger scale: we knew that if five thousand people came to our application at one moment, it would simply collapse and no one would be able to use it. So we decided to rebuild everything from scratch and create the right infrastructure, and about 90% of it is reusable for HydraDX. We are not doing this just for Basilisk; it is reusable, fundamental work, and it will significantly shorten the time to launch HydraDX.
I don't want to give dates or anything like that - we've made that mistake before - but it won't be as long as some people might think. Don't get us wrong: we've been involved in startups, but most startups don't have any communication with the community; you just talk to your investors, and they always agree if you're doing well. For us this is something new, and we will do better in this area in the future.
Jakub Gregus - For HydraDX, Colin created the final Omnipool spec, which has been double-checked and validated by Blockscience. The developers are starting to work on its implementation. We realized that the HydraDX implementation is much simpler than Basilisk, as Jakub already mentioned, and much of the middleware has already been re-engineered and can be reused.
Colin - As far as Omnipool is concerned, we are at the stage where the basic structure is already in place. We are modeling various scenarios, such as how high fees should be and what makes HydraDX attractive in terms of liquidity. The basic structure is already defined and ready to be implemented.
Question - When you talk about adding assets to the Omnipool, I would like to clarify whether this will be done by the technical committee or HydraDX governance, or, for example, through referendums decided by HDX holders?
Second question - You said that fees will be charged for using the protocol. How will these fees be distributed to HDX token holders? Could you elaborate on what the distribution mechanics will look like - will it be a share paid in HDX, or will fees be distributed to holders as dividends or something similar? Can you tell us more about this?
Colin - We don't yet have a definite picture of what these fees will look like when the network launches. I gave a list of examples of what could be included in the protocol governance process, but the bottom line is that even if we start without fee distribution, HDX tokens will be used to govern the protocol, and from the Substrate perspective a lot can be changed through runtime upgrades. HDX token holders will indeed be able to introduce or raise a fee and distribute protocol fees to HDX token holders. They have the ability to manage the tokenomics and the protocol, but it won't happen on the very first day.
Question - What will the procedure or mechanics of the fees be? For example, governance decides that it wants to increase the fees for using the protocol and distribute them among HDX token holders. After the runtime upgrade is performed, what will the mechanics look like - providing liquidity or staking?
Colin - We have yet to work it out. We are looking at something like a Curve distribution.
Jakub Panik - Fees will be collected in some treasury, or there may be a secondary treasury, and once we have it we can open access so that people can receive part of these funds.
Jakub Gregus - Fee allocation is a very delicate process, because you don't want to turn your token into a security and provoke the regulators. Regulators can be very sensitive to this: a pure redistribution of fees is a clear digital analogue of a security, and the regulators could fine us or shut down our activities. In crypto you can get away with buybacks, which are a kind of indirect redistribution, or with a burn model. There is also the Curve escrow model, which we see becoming the gold standard in the industry. Many protocols that use these mechanics have reviewed them with their lawyers and everything looks fine.
You need to do some work, and that work materializes as voting rights. This is the essence of the Curve model, which for us is the best candidate for implementation. The entire community is already familiar with this model, so at least this once we will not shock or mislead anyone.
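For readers unfamiliar with the Curve escrow model, here is a minimal sketch of the idea applied to HDX (purely illustrative; the maximum lock and units are assumptions, and this is not a confirmed HydraDX design): voting weight equals the locked amount scaled by the remaining lock time, decaying as the unlock date approaches.

```rust
/// Hypothetical sketch of Curve-style vote escrow applied to HDX (not a confirmed design).
/// Voting weight is proportional to the locked amount times the remaining lock time,
/// and decays linearly as the unlock date approaches.

const MAX_LOCK_WEEKS: u64 = 208; // roughly 4 years, the value Curve uses; assumed here

struct Lock {
    amount: u128,
    unlock_week: u64,
}

fn voting_weight(lock: &Lock, current_week: u64) -> u128 {
    let remaining = lock.unlock_week.saturating_sub(current_week).min(MAX_LOCK_WEEKS);
    // weight = amount * remaining / max_lock, so only a full 4-year lock counts 1:1.
    lock.amount.saturating_mul(remaining as u128) / MAX_LOCK_WEEKS as u128
}

fn main() {
    let lock = Lock { amount: 1_000_000, unlock_week: 104 };
    for week in [0u64, 52, 104] {
        println!("week {week}: weight {}", voting_weight(&lock, week));
    }
}
```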
Jakub Panik - Regarding the technical committee: we would like to create a technical council, partly because of the regulators. We need a committee that can help us add new tokens, or perhaps we will find a mechanism that reduces the risks of adding new tokens to Omnipool. This must be resolved first.
Jakub Gregus - There are teams that provide such services for networks like Compound: they measure the liquidity properties of assets and then, based on the results and models, set collateral ratios, interest rates and so on. I would also like to create an economic council, not just a technical council. The economic council would handle questions related to token economics: what the volume ratios should be, how large the fees should be, what share of fees to take, how to use the liquidity. Maybe HydraDX governance could redirect some of the liquidity into DOT tokens, participate in crowdloans, and use the crowdloan rewards as liquidity for HydraDX. The economic council would handle these and other economic questions, while the technical committee would be responsible for the technical safety of these assets. I hope these councils will be made up of the best people in the industry, and I will gladly vote for a very good salary for them.
Question - When we move to the mainnet, the fear of slashing will become real. It's not that scary on the testnet, but once the mainnet goes live it will be. I hear a lot of comments on Discord about some validators not getting nominators. Will that lead to slashing? Will it be a problem? Will it make people afraid to nominate, and will it reduce the amount of tokens staked? Because we need more validators to strengthen the network. If a validator goes offline, will it be slashed?
Jakub Panik - Regarding the last part of the question: you can go offline to update your node and validator. It is not forbidden; on the contrary, it is encouraged. As for the first part, we probably won't have staking as it exists now, because the shared security of Polkadot and Kusama means we don't need to maintain it ourselves. We thought about staking and using it on the mainnet, but we don't need it at the very beginning. We only need collators, and we don't need a large number of people doing this, as is happening now. This is a real question because we need to figure out what to do with the people who are validating on the testnet right now, but we have a solution; it is probably too early to talk about it, but we want to decentralize the infrastructure - not only the internal infrastructure, but also the node infrastructure. We may have incentive schemes for node operators supporting our infrastructure, but it's still too early to tell. We'll refine these ideas as the mainnet launches.
Jakub Gregus - In this case, the penalties will not be as harsh, or they may even be impossible. In the worst case a node becomes unavailable, and the user automatically switches to another node. Listening to you, I understand that there is still a lot of confusion in the Polkadot and Substrate community about the roles of validators and collators, because a parachain's validators are Polkadot's validators. As a parachain, you don't need your own validators; you need collators. It can be 1 node, 10 nodes or 100 nodes, it doesn't matter, because collator nodes cannot make invalid state transitions, so their security properties are not as important as the security of the validators. Collators and full nodes are more like service providers than POS validators.
Substrate offers a super powerful feature called off-chain workers, which can be used to do a lot of useful work in parallel or off-chain, but such nodes still need to have some share of tokens at stake to guarantee that the data they add to the chain is valid. These nodes would have staking enabled and could, for example, match transactions off-chain so that it doesn't all have to happen on the blockchain - possibly with zero-knowledge proofs to save a lot of space, and so on.
Jakub Panik - It could be data providers: middleware nodes that provide data could have some share of HDX tokens staked to back the claim that their data is correct, and if it is incorrect they will be slashed.
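A minimal sketch of that staked-data-provider idea (this is not the Substrate off-chain worker API; the registry, bond amounts and slash rate are assumptions for illustration): a provider bonds tokens behind its submissions and loses part of the bond when a fault is proven.

```rust
/// Hypothetical sketch of a staked data provider (not the actual Substrate off-chain
/// worker API): a provider bonds tokens behind the data it submits and is slashed
/// if a later fraud proof shows the data was wrong.

use std::collections::HashMap;

struct Provider {
    bonded: u128,
}

struct Registry {
    providers: HashMap<String, Provider>,
    slash_permill: u128, // share of the bond burned per proven fault (assumed)
}

impl Registry {
    fn may_submit(&self, who: &str) -> bool {
        // Only providers with a live bond may submit off-chain results on-chain.
        self.providers.get(who).map_or(false, |p| p.bonded > 0)
    }

    fn slash(&mut self, who: &str) {
        if let Some(p) = self.providers.get_mut(who) {
            let penalty = p.bonded * self.slash_permill / 1_000;
            p.bonded -= penalty;
        }
    }
}

fn main() {
    let mut reg = Registry {
        providers: HashMap::from([("alice".to_string(), Provider { bonded: 10_000 })]),
        slash_permill: 100, // 10% per proven fault (assumed)
    };
    println!("alice may submit: {}", reg.may_submit("alice"));
    reg.slash("alice"); // a fraud proof against alice's data arrived
    println!("alice bond after slash: {}", reg.providers["alice"].bonded);
}
```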
Question - My question is about your journey and how you feel about the ecosystem as a whole. What is your view of Substrate after working with it for a year - has it become more or less powerful, in your opinion? And how do you see the interaction with other teams? I know you mentioned kUSD as a starting point in Omnipool. What are Acala's incentives to support HydraDX when they have their own swap functionality? How do you see yourselves in the ecosystem, and what is the morale of you and your team?
Jakub Gregus - That's a great question. When we started developing HydraDX on Substrate in early 2020, we had a lot more concerns about the ecosystem: there were not many parachains, at least not visible or planned. That is no longer a concern, although personally I still do not see a large flow of new parachains, to be honest. On the other hand, the interesting Substrate networks are very specialized and have very unexpected use cases. I'm much more optimistic about Substrate than I've ever been, especially after conversations with many developers from other ecosystems over the past year, and after seeing how many ideas or parts of Substrate get implemented in other networks. Obviously the Cosmos SDK is more mature because it's much older and field-proven; it didn't have any stressful changes in 2019 or 2020, so a lot of projects moved there - it's a more stable development solution. But then you see that Polygon uses Substrate to access data in its network, that Octopus Network on the Near protocol uses Substrate to connect application-specific networks to Near, and that Compound chose Substrate.
Also, I see that Cosmos developers have a lot of respect for Substrate, and some of them even choose Substrate for their latest projects, and Ethereum developers working on an ETH 2.0 client are actually moving to Substrate and are very happy with it - in particular with off-chain workers and other Substrate features, especially runtime upgrades.
A few weeks ago I was talking to the person who organized and carried out the first hard fork for Cosmos. We agreed on how difficult that is to pull off on any network or system, while any Substrate network - Polkadot, Kusama or whatever - upgrades just like that, even every 2-3 weeks; it is simply unthinkable compared with other teams and other projects. I see this as one of the underrated features, because there is a mantra that a blockchain should be as immutable as possible, but I don't think that's true. Even Bitcoin in 2018 or 2019 had a very critical inflation bug, and you wouldn't expect old, battle-tested code and a simpler blockchain to have such critical flaws; they were super critical and had to be fixed as soon as possible. After all, you still need people to maintain the software, to maintain the network, to provide power and electricity.
Every network is still just a tool for people to coordinate and get things done, so you can't just disappear and say, "Yeah, it's the same software." No, it needs to be maintained, and that's why I think runtime upgrades are a very elegant way to evolve into something better over time - it's the coolest killer feature Substrate has to offer.
In terms of the ecosystem, meeting many founders makes me calmer and more confident - especially the developers - because I realized that many of them are super talented people moving to Substrate or Polkadot from other ecosystems such as Algorand and Cosmos, and they are really wonderful people, not only in terms of knowledge and experience but also as individuals. Obviously there are people who are not so great, who don't have honest intentions to build something meaningful and would rather make quick money, but such people are everywhere. I'm much more confident in and satisfied with the ecosystem than before. On the other hand, the ecosystem has many drawbacks, because it is one of the hardest ecosystems to develop for, whereas Avalanche and everything that uses the EVM works great: you can fork anything that lives on Ethereum, it will work fine, and users can just use Metamask.
Using all the infrastructure and tools already built helps any developer move super fast and ship projects in weeks or months, while Substrate and Polkadot don't have a lot of tried-and-tested code that you can fork, build on top of and then improve. Another thing I could mention is the tooling problem: Substrate and Polkadot were changing too fast, which also caused a lot of delays on our end, and much of the tooling and infrastructure was outdated and didn't work very well. It was very hard in 2019, 2020 and 2021, but that is simply the pain of any new ecosystem. Solana last year, or in 2020, was very similar: it was very difficult to develop and run anything on it, there was no code to fork, there was no toolkit, everything had to be created from scratch. This is also a great opportunity for anyone who can see these problems and, instead of complaining, find solutions and ask the treasury for a grant to implement them.
The treasuries of Polkadot and Kusama have never been as full as they are now. I think this will finally push a lot of people to work in the ecosystem. So the ecosystem as a whole has everything, good and bad, and this applies to every ecosystem. I've talked to many founders from other ecosystems and everyone complains about something, as always. The whole crypto industry is still at a very early stage - even Ethereum is still at a very early stage of development. There are so many things that are missing or not done well. I think we should see this as a great opportunity, not as a hindrance, and support the people who are building the ecosystems.
We are at the stage where we need to build an ecosystem, and we are moving very strongly in that direction - a stage I didn't expect back in 2019 or 2020, when there was much more doubt. Now that we see parachains and XCM launched, all these things work and all the FUD from competitors no longer matters. The ecosystem is working, and we can finally move on to the phase of Polkadot and Substrate where most of the code has already been written and implemented, and developers can simply optimize and improve it.
I'm more optimistic than ever, but other projects should work together more, like a family, rather than trying to build everything on their own - otherwise there will be no ecosystem; if that's what they want, they can go and build their own. It would be great to see better collaboration between the teams, but so far it hasn't happened much because everyone was too busy developing their own projects. Finally, more teams are getting to the integration stage: for example, Interlay is doing a lot of integrations, Moonriver and Moonbeam are launching XCM, and Acala, Astar and a few other top teams are integrating with other projects. We are approaching the stage of an actual ecosystem, not just a collection of individual networks.
Jakub Panik - I have had a conflicted relationship with Substrate: first love, then hate, then love again. Right from the start, Substrate allowed us to roll out a blockchain very quickly, which is unprecedented. We had worked with other solutions and more developers before, but this was the fastest one we tried. After that we had to build something custom, and it was very difficult because there were a lot of updates and they slowed us down a lot. It was a growing pain, made worse by the pressure to launch Polkadot and Kusama: Parity and the other developers needed to solve problems and launch the network quickly, and I understood that. A few times, when we had problems, I dove deep into the Substrate code and thought, “Oh shit, there are millions of lines of code in here” - but it doesn't look like bloated code, it's really good, concise code with research and documentation behind it. I think I saw a blog post where Gavin Wood wrote that Substrate has 2 million lines of code, which is a huge help if you have a framework you can use - it's really powerful. You can build on top of it and change things as you see fit. Deep dives take a long time because things change, but right now we're talking to Robert Habermeier and the people who make Substrate, and they are basically done with the core functionality. It will not change much, so if you want to start building something, there is no better time. We know a lot about this because we started earlier, and you won't have the problems we had. I want to encourage all the founders who were put off by the large number of changes that slowed everyone down: this is no longer a problem. Most things are done and ready; if you're thinking about building something, you should do it right now - it's really powerful. I don't know of a better option for building a custom blockchain right now. We are constantly looking at other solutions, and everything has its pros and cons, but this is a really good ecosystem with professional developers and support.
Q - You talked about the Web3 Foundation grants you received. How are things going with the Web3 Foundation? Do they provide support - does, for example, Gavin Wood support HydraDX? Does this mean that the whole ecosystem is waiting for Omnipool and sees value in it? What do you think?
Jakub Gregus - I don't know how aware Gavin Wood is of us - probably he is. We don't talk much with the Web3 Foundation, more with Parity. The Web3 Foundation funds and organizes the research, but the research is ultimately implemented by Parity. In a sense, Parity is the more important player in the ecosystem, and it is better for projects to be closer to them. The grant program is useful for newcomers who see an opportunity for tool development, or if you want to develop something specific that might be useful to others; it will lower your development costs a bit, but you can't rely on grants alone for something important, like an ETH bridge or a BTC bridge. We had a validator monitoring grant in 2019, but it didn't get renewed for some strange reason. That really annoyed us at the start and then pushed us forward: we decided to build something that doesn't depend too much on grants. If you want to create something that is missing in the ecosystem, asking for a grant is one of the best ways to do it, and you can also get some guidance from them on how to implement it. Are you talking about the Subaction Grant?
Question - I would like to know if you cooperate with Parity? Do they expect a product from you and see its value for use in the ecosystem?
Jakub Gregus - These organizations need to be very careful about what they communicate externally; they need to remain neutral. They don't want a lot of hype or reporting around specific projects, so as not to give the community or retail investors the impression that this is the chosen, correct project where you should invest your money. Sometimes projects in the Polkadot ecosystem and other ecosystems exploit this hype or marketing - "Oh, we got a grant from the Web3 Foundation, they support us" - which is very questionable.
Question - I mean, this is not a regular DEX or AMM; it should be breakthrough technology. Do they appreciate your ideas?
Jakub Gregus - I personally think they don't understand how revolutionary or innovative the project is because most of the people who work for the Web3 Foundation or Parity are not DeFi experts. They are experts in cryptography, Rust, back-end development, networking and other areas. So when they see an Omnipool or something like that, they can't tell how it's better than Sushiswap or another XYK model.
Jakub Panik - We work with these people at Parity and we love them not only as developers but as people in general. They are very good guys. I think they have some confidence in our project and what we are doing.
Jakub Gregus - When we ask them to review the code for us, they review it and then tell us that they didn't find any major bugs, only minor cosmetic issues. They tell other colleagues about it and spread a good word about us when they see other solutions for the infrastructure they are also trying to develop. They know how difficult it is with much more resources, and we did it with less. They can only help with what they are familiar with. They helped us when we launched on the Rococo testnet or when we made the first XCM.
Jakub Panik - This is not the best question for us, because Jakub and I hate bragging. It is better to ask Parity.
Jakub Gregus - Unfortunately, in my opinion, they don't think much about how it's better or how it differs from other projects - they just don't have time to think about it. There is a lot of pressure on everyone at Parity and the Web3 Foundation to get Polkadot and Kusama fully up and running and successful, because everyone expects it to be one of the most successful ecosystems, since Gavin Wood created Ethereum. They have a lot of work.
Q - Do you want to implement fractional, algorithmic pricing for LHDX?
Jakub Gregus - This is an impressive and interesting question. When we started working with Blockscience in December last year, one of their first questions was: “Do we want to make a stablecoin out of LHDX? It would be a significant change, but it would simplify a lot of things.” We thought it was an impressive idea, but we don't want to make a stablecoin, because it has too many nuances. While we were thinking about it, we considered that maybe not a stablecoin, but some asset like RAI from Reflexer would be very interesting. In addition, there is a growing problem with USD stablecoins and other fiat currencies: inflation is becoming a very serious issue. So I'm not sure stablecoins would be attractive - maybe some Anchor assets associated with them. If you look at the FEI protocol, it's very similar to HydraDX, but they don't have their own DEX; Terra is also similar in some ways, but having a native token as collateral is very dangerous - in May we saw Terra almost enter a death spiral, and for this reason they are thinking about introducing BTC as another asset in their pool. These ideas are relevant and very interesting. They would allow for the creation of a highly scalable stablecoin, which could also have very organic demand, since it would stabilize the pool, but right now it's very futuristic. It's possible, but it's not our priority. If anyone wants to work on it, we'll be happy to support and maybe even fund it, but it's not on our agenda for today. Let's launch HydraDX and Basilisk first; we have a lot of ideas and features that need to be rolled out on the HydraDX and Basilisk networks.
...
Jakub Gregus - I just want to look back at the past year and thank you all for your exceptional patience. We didn't communicate well and often left you in chaos. This was not done on purpose; no one was hiding, no one was spending your money. Thank you all, and we look forward to your support as we finally launch our mainnet!
Jakub Gregus - Exactly, the whole history of the market, I was against governance tokens, I really hate them. In my opinion, it's really unfair to the community to give some tokens that are meant to govern the protocol, but in reality, they don't have the power to change anything. We see such cases with OpenSea or dYdX tokens. For HydraDX, this is a completely different scenario, the community can really change everything and choose where to extract value and where not. This should also be decided by the community and researched in every possible way on how best to capitalize on all of HDX's infrastructure and liquidity.
Q - Was the decision to redesign the token entirely in light of recent developments in the liquidity war? Does this sound like what is happening now where people are really trying to take control of the governance tokens because they actually decide where the liquidity goes? Is this to make sure that no group of whales can get hold of a lot of control tokens and have fun protocol games?
Jakub Gregus - In fact, yes, for us it was one of the bells.
Question - Do all these arguments of the community members about the LHDX token have ground under them - “Oh no, the second token appeared out of nowhere and the team was silent all this time, they probably want to deceive us”? As for me, this is an obvious solution to keep the system secure in the long run.
Jakub Gregus - Yes, we thought about it from the very beginning, but Curve Wars and other control token hacks were the last wake-up call for us. We decided for ourselves, “We can’t let this take its course and hope that this story does not affect us.” Huge money will be at stake, and there will be people who will try to cheat the system.
Jakub Panik - Perhaps I could add something from myself. We thought about these problems from the very beginning and we had some solutions to solve them. For example, the introduction of fees or a slight change in the Omnipool model, but each decision introduces serious problems in the design of the protocol. For example, if we change the design of Omnipool, then computing transactions on the network becomes too complex. This will become less efficient for trading. The second solution would interfere with the oracles, because it would create a huge spread on HDX. So we thought about limiting the purchase of HDX from the pool, but that wasn't the ideal solution and we felt it was the only good solution. In the end, the two-token solution turned out to be the best, because there were restrictions on the HDX supply that the previous design did not have.
Jakub Gregus - We spent hundreds of hours creating it with Blockscience and eventually came up with this mechanic. They have a very deep understanding of governance tokens. Blockscience actually simulated an attack on Gitcoin.
Question - I have a question about adding new tokens. Do you have an idea about the conditions for adding new tokens? Are we going to focus on big tokens at the very beginning like DOT, ETH? Will we also give smaller tokens a chance and how often do you plan to add new tokens? Rech Is it about one token per week, ten tokens per week?
Jakub Gregus — Yes, I've been thinking about it a lot lately. We can focus on major tokens and blue chips that are included in some indexes like DPI, MakerDAO, Compound, Aave. All tokens that pass security checks. They have internal security audit teams that dive very deep into all the possible risks associated with these tokens. All of them are fairly secure, then all L1 tokens that could be transferred to the bridge, and most of the legitimate projects from them. I can imagine that some legitimate teams with new projects can be listed faster. For example, a team at the level of Acala or Interlay appears in the ecosystem and we will see that they do an excellent job in all directions. They have security audits and there is little reason for us to wait a year or more for them to be listed. Obviously, we will also have certain volume and liquidity requirements.
Jakub Panik - We can also use Basilisk to assess whether assets have enough volume or are they safe enough for us. In the future, we are thinking about mechanisms that will allow us to include even less secure assets in Omnipool, but they will be limited by liquidity, so they will not pose a big risk to other assets.
Jakub Gregus - The key will be to combine multiple oracles. Chainlink has been field tested, but not everyone agrees. We can combine several other oracles and sources.
Question - The whole DeFi market has become quite crowded over the last year, various DEXs, various networks. What is your marketing strategy for generating sustainable liquidity on HydraDX? Omnipool seems like a pretty tidbit to hack, how do you rate Omnipool's liquidity drain risk and what are your risk mitigation measures?
Jakub Panik - The first thing to do is not to accept assets that are not safe, but we are also thinking of a few measures:
1 - Fees that are based on volume, if there is a huge pressure on one token, then we increase the fee to infinity so that traders cannot withdraw all liquidity in a short time
2 - We're thinking about an emergency stop button, this process will put it to a popular vote if something really bad happens, we should have that emergency measure in place to be able to stop trading if enough people decide to do so. I don't know if other projects have this feature.
Jakub Gregus - It's called "automatic fuse". I also have a vision for how to make decentralized storage more secure. I have an Entropy project in my field of vision, this project is one of the coolest Substrate, which will make trading much more secure, because all assets will be stored elsewhere, perhaps on Statemine or Statemint, DOTsama DeFi will use the rights to them and there will be some withdrawal period to have some buffer if any hack or any miscalculation or error in the code happens, not only in HydraDX or Basilisk, but also in bridges, XCM and other network components. And then, obviously, the most standard measures, such as security audits. We are trying to attract companies that specialize in this, but they have a lot of work in various ecosystems, mainly in Ethereum.
The first question was about the strategy that will help us succeed. In the short term, it will be liquidity mining initiatives that will be subsidized not only by us, but by us and other projects in the Dotsama ecosystem that will indeed offer their own incentives, so there will be incentives for a large influx of users into the ecosystem. There is also some talk of incentivizing the initial launch of ecosystem-wide liquidity through the Kusama and Polkadot treasuries. In fact, the Polkadot Treasury is the best candidate for this. The Polkadot ecosystem is far behind other ecosystems, so we need to put in as much effort as possible on every front, and large incentive programs are being successfully used by most of the alternative ecosystems. So, we have no other way to avoid this.
So, in the medium/long term, bonds that are a bit like Olympus bonds can be a great solution, but ours will be different, they will be similar to traditional bonds. In the case of Olympus, you have bonds with a maturity of 5 days, in fact we will be holding a kind of auction that will encourage bonds with the maximum possible maturity. There may be people who decide to lock their liquidity in HydraDX even for years. I would probably do the same, it will allow us to offer the highest APY we have in the secondary bond market. You can use these bonds as collateral on Angular or maybe other markets because bonds are much more effective at maintaining liquidity than liquidity initiatives. Most of them are very wasteful, this is another observation from the liquidity wars where you have invested billions of dollars in some pools, while these pools generate only a few thousand dollars of commission per day or their volume is much less, so the commissions are very small, so you don't need to incentivize pools to be so bloated.
But what you really need is to incentivize its use, so we will introduce incentives for trading volume, that makes a lot more sense, there is nothing unique or revolutionary about it, because all CEX, I don't mean only crypto CEX, but also stock exchanges or any exchanges in the world use this throughout the history of trading, incentivizing market makers or other parties by offering them discounts on commissions or negative discounts, they really make money on volumes, because if you have liquidity pools with billions of assets , which are just there and no one uses them for trade or exchange, this is not profitable and even wasteful.
And it's not just a look at AMM incentivization, but it's also incentivizing integrations to connect to HydraDX and Basilisk, you allow users to pay fees in the destination protocol they want to pay. For example, users of some wallet need to call a smart contract on Astar or ETH or somewhere else, but they do not want to pay a fee in more scarce assets. They want a predictable fee for this, so HydraDX and Basilisk can provide this service on the Back-end. Users will be able to pay fees in stablecoins whenever they want and using any wallet that will be integrated into our SDK. This is one of the first and most obvious use cases that we will have to finish and prepare on our end. We have seen that AMMs are perfect for these use cases, improving the experience in the cryptosphere, so this is a clear incentive for other application developers to connect with us. The second thing is moving our design forward and improving it over time, implementing some of the other features that we have in development. I've even compiled a short list of features that we'll be offering from the early days of HydraDX that have been implemented in other protocols by other teams. We can't wait to put everything together to maximize the efficiency of DeFi capital. We will release an updated roadmap with detailed bullet points that will
Jakub Gregus — We had communication problems for almost our entire history, because until now, we were a very small team of 6 people who managed the entire project, tested and integrated all the new products from Polkadot, Substrate and Parity. We've been doing all this insanely complex research, but fortunately, we've filled in all the missing positions in our organization that we've only recently realized, to our regret. We were too deeply immersed in research or development. For most of November and December, we were looking for people who could fill the holes in the organization, not only in the position of developers, but also in the field of communication, organization, operations and in general, those people who could provide a more professional approach to everything, because very soon We are already planning to launch the project.
All communication with the community should be more subtle and more careful about all changes. The latest blog failure just highlighted the importance of a deeper understanding of what it would look like for the community who are less dedicated or less involved in the project so they don't understand all the nuances. Every other major change will be announced gradually, at least we will randomly select a few people from the community, get feedback and move on to more open development. Almost all of our code is open source and we discuss everything publicly with the community, but in terms of the development process, we are not very ready for this. We tried to do this from the very beginning, but it was not very effective, because you need to have a lot more organization and much more carefully planned tasks, you need to analyze these tasks much better. There were not enough people on the team for this, which we eventually resolved, so there is no one-size-fits-all solution to this problem, but we are working on it. Starting with the publication of this year's strategy and an updated roadmap. This will be a great starting point and everything will be public and for the community first.
Now we have several people who are responsible for the development of the community. Basically, even aggregating all of our research and specs was really a challenge for new people, which we finally solved too, and the reason there were big gaps between us and you in the past is because we've been moving so far. too fast. We tried to be first on Rococo and tried to make the first cross-chain call and transfer. I do not want to make excuses, but for us there was too much of everything and we constantly lacked not only developers, but also positions in the community. We have hundreds of channels. I don't mean with the community, but with all other projects. Projects who want to integrate with us, who want to collaborate with us, who want to work with us or join research, join liquidity mining, etc. We had no intention of running away and hiding from everyone, but there was too much in one moment, finally things get better. We have several people who are responsible for specific things like back-end, front-end, community, security, research, project management, hiring, etc.
Jakub Panik - Let me talk about the current state of development. We had to take a step back because we needed to hire more people. Seeing how everything starts to crumble due to several people visiting the crowdlon interface, we realized that we need to create a strong infrastructure and cooperate with other projects in this direction. We've taken a step back, we've started working with Subsquid on data processing to give users the right data so they can use the apps and be fast and productive. It took a lot more effort than we thought, but we decided to do it because the long term thing that will help us in the future is by doing this we prevent all further problems. These kinds of problems would still happen at the very beginning, but to a much lesser extent, we knew that if 5 thousand people came to our application at one moment, then it would simply collapse and no one would use it. We decided to take a step back and rebuild everything from scratch, create the right infrastructure and this can be 90% usable for HydraDX. We don't just do this for Basilisk, it's a reusable fundamental work. And this will significantly reduce the launch of HydraDX.
I don't want to give dates or anything like that, we've made these mistakes before. It won't be as long as some people might think. Don't get us wrong, we've been involved in startups, but most startups don't have any communication with the community, you just talk to your investors and they always agree if you're doing well. For us, this is something new and in the future we will work better in this area.
Jakub Gregus - For HydraDX, Colin created the final Omnipool spec, which has been double-checked and validated by Blockscience. Developers start working on its implementation. We realized that the HydraDX implementation is much simpler than Basilisk, as Jakub already mentioned. Much of the imported middleware will already be re-engineered.
Colin - As far as Omnipool is concerned, we are at the stage where the basic structure is already in place. We are modeling various scenarios, such as how high fees should be and what makes HydraDX attractive in terms of liquidity. The basic structure is already defined and ready to be implemented.
Question - When you talk about adding assets to the Omnipool, I would like to clarify whether this will be done by the technical community or HydraDX management or, for example, through referendums based on decisions made by HDX holders?
Second question - You said that fees will be charged for using the protocol. How will these fees be distributed to HDX token holders? Could you elaborate on the distribution mechanics: will it be a share of HDX, or will fees be paid out to holders as something like dividends? Can you tell us more about this?
Colin - We don't have a definitive picture of what these fees will look like at network launch. I gave a list of examples of what could be included in the protocol governance process, but the bottom line is that even if we start without fee distribution, HDX will be used to govern the protocol, and from a Substrate perspective a great deal can be changed later through runtime upgrades. HDX holders will indeed be able to introduce or raise fees and direct protocol fees to HDX holders. They have the ability to manage the tokenomics and the protocol, but it won't happen on day one.
Question - What would the procedure or mechanics of the fees be? For example, governance decides it wants to increase protocol fees and distribute them among HDX token holders. After the runtime upgrade is performed, what would the mechanics look like - providing liquidity, or staking?
Colin - We have yet to work it out. We are looking at something like a Curve distribution.
Jakub Panik - Fees will be collected into a treasury, possibly a secondary treasury, and once it exists we can open access to it so that people can receive part of these funds.
Jakub Gregus - Fee allocation is a very delicate process, because you don't want to turn your token into a security and provoke the regulators. Regulators can be very sensitive to this: a pure redistribution of fees is a clear digital analogue of a security, and regulators could fine us or shut our activities down. In crypto you can get away with buybacks, which are a kind of indirect redistribution, or with a burn model. There is also the Curve vote-escrow model, which we see becoming the gold standard in the industry. Many protocols that use these mechanics have reviewed them with their lawyers and everything looks fine.
You have to do some work, and that work materializes as voting rights. That is the essence of the Curve model, which for us is the best candidate for implementation. The entire community is already familiar with this model, so at least this time we won't shock or confuse anyone.
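As a rough illustration of the Curve-style vote-escrow idea mentioned above: voting power grows with both the amount locked and the remaining lock time, decays as the lock approaches expiry, and any distributed fees are shared pro rata by that power. The 4-year maximum lock and linear decay below simply mirror Curve's veCRV convention; nothing here is a confirmed HydraDX design.

```python
# Minimal sketch of Curve-style vote-escrow ("ve") mechanics.
# Convention borrowed from veCRV: power = amount * remaining_lock / max_lock,
# decaying linearly to zero at unlock. Parameters are illustrative only.

MAX_LOCK = 4 * 365 * 24 * 3600  # 4-year maximum lock, as in Curve

def voting_power(amount: float, unlock_time: int, now: int) -> float:
    """Voting power of a locked position at time `now`."""
    remaining = max(0, unlock_time - now)
    return amount * min(remaining, MAX_LOCK) / MAX_LOCK

def fee_share(user_power: float, total_power: float, epoch_fees: float) -> float:
    """Pro-rata share of an epoch's distributable fees."""
    return 0.0 if total_power == 0 else epoch_fees * user_power / total_power

if __name__ == "__main__":
    now, year = 0, 365 * 24 * 3600
    alice = voting_power(10_000, unlock_time=now + 4 * year, now=now)  # max lock -> full weight
    bob = voting_power(10_000, unlock_time=now + 1 * year, now=now)    # 1-year lock -> 1/4 weight
    print(alice, bob)                              # 10000.0 2500.0
    print(fee_share(alice, alice + bob, 1_000.0))  # 800.0
```

The point of the lock is exactly what is described above: governance weight (and any fee share attached to it) is only earned by committing capital for time, which makes it expensive to buy influence on short notice.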
Jakub Panik - Regarding the technical committee: we would like to create a technical council, partly because of the regulators. We need a committee that can help us add new tokens, or perhaps we will find a mechanism that reduces the risks of adding new tokens to the Omnipool. This has to be resolved first.
Jakub Gregus - Some teams provide this kind of service for Compound: they measure the liquidity properties of assets and then, based on the results and models, set collateral ratios, interest rates, and so on. I would also like to create an economic council, not just a technical one. The economic council would handle questions about the token economy: what the volume ratios should be, how large fees should be, what share of fees should be collected, and how liquidity should be used. Perhaps HydraDX governance could redirect some liquidity into DOT, participate in crowdloans, and use the crowdloan rewards as liquidity for HydraDX. The economic council would cover these and other economic questions, while the technical committee would be responsible for the technical safety of the assets. I hope this committee will be made up of the best people in the industry, and I will gladly vote for a very good salary for them.
Question - When we move to mainnet, the fear of slashing will become real. It's not that scary on a testnet, but once mainnet goes live it will be. I hear a lot of comments on Discord about some validators not getting nominators. Will that lead to slashing? Will it be a problem? Will it make people afraid to nominate, and will that reduce the amount of tokens staked? We need more validators to strengthen the network. If a validator goes offline, will it be slashed?
Jakub Panik - Regarding the last part of the question: you can go offline to update your node and validator. It is not forbidden; on the contrary, it is encouraged. As for the first part, we probably won't have staking as it works now, because the shared security of Polkadot and Kusama means we don't need to maintain it ourselves. We thought about staking and using it on mainnet, but we don't need it at the very beginning. We only need collators, and we don't need a large number of people doing that, as is happening now. The real question is what to do with the people who are validating on the testnet right now. We have a solution, but it is probably too early to talk about it: we want to decentralize the infrastructure, not only the internal infrastructure but also the node infrastructure. We may have incentive schemes for node operators supporting our infrastructure, but it's still too early to say. We'll refine these ideas as the mainnet launch approaches.
Jakub Gregus - In this case, penalties will not be as harsh, or will even be impossible. In the worst case a node becomes unavailable, and the user automatically switches to another node. Listening to you, I can see there is still a lot of confusion in the Polkadot and Substrate community about the roles of validators and collators, because a parachain's validators are Polkadot's validators. As a parachain, you don't need your own validators; you need collators. It can be 1 node, 10 nodes, or 100 nodes - it doesn't matter, because collator nodes cannot make invalid state transitions, so their security properties are not as critical as those of validators. Collators and full nodes are more like service providers than POS validators.
Substrate also offers a very powerful feature called off-chain workers, which can be used to do a lot of useful work in parallel or off-chain. Nodes doing that work could still be required to put some share of tokens at stake to back the results they submit to the chain. Such nodes could, for example, match transactions off-chain so that not everything has to happen on the blockchain, or use something like zero-knowledge proofs to save a lot of space, and so on.
Jakub Panik - It could be data providers: middleware nodes that provide data could have some share of HDX staked as a bond that their data is correct, and be slashed if it is not.
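To make the staked data-provider idea concrete, here is a minimal sketch under assumed rules: each provider bonds stake behind the value it reports, the round accepts the median report, and providers whose reports deviate too far from the accepted value lose a fraction of their bond. The median rule, tolerance, and slash fraction are illustrative assumptions, not the HydraDX implementation.

```python
# Minimal sketch of bonded data providers with slashing for bad reports.
# Acceptance rule (median), tolerance, and slash fraction are assumptions.

from dataclasses import dataclass
from statistics import median

SLASH_FRACTION = 0.10   # hypothetical: lose 10% of stake for a bad report
TOLERANCE = 0.01        # hypothetical: >1% deviation from the accepted value is "bad"

@dataclass
class Provider:
    name: str
    stake: float

def settle_round(reports: dict[str, float], providers: dict[str, Provider]) -> float:
    """Accept the median report and slash providers that deviate beyond tolerance."""
    accepted = median(reports.values())
    for name, value in reports.items():
        if abs(value - accepted) / accepted > TOLERANCE:
            providers[name].stake *= (1.0 - SLASH_FRACTION)
    return accepted

if __name__ == "__main__":
    providers = {n: Provider(n, 1_000.0) for n in ("alice", "bob", "carol")}
    price = settle_round({"alice": 100.0, "bob": 100.2, "carol": 120.0}, providers)
    print(price)                      # 100.2 accepted
    print(providers["carol"].stake)   # 900.0 after slashing the outlier
```

The same bond-and-slash pattern would apply whether the reports are prices, off-chain matched trades, or other middleware data.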
Question - My question is about your journey and how you feel about the ecosystem as a whole. What is your view of Substrate after working with it for a year - has it become more or less powerful, in your opinion? And how do you see interaction with other teams? I know you mentioned kUSD as a starting asset in the Omnipool. What incentive does Acala have to support HydraDX when they have their own swap functionality? How do you see yourselves in the ecosystem, and how is your and your team's morale?
Jakub Gregus - That's a great question. In early 2020, when we started developing HydraDX on Substrate, we had far more concerns about the ecosystem: there weren't many parachains in sight or even planned. That is no longer a concern. On the other hand, I personally don't see a large influx of brand-new parachains, to be honest - that's just my opinion - but the interesting Substrate networks are very specialized and have quite unexpected use cases. After talking to many developers over the past year, I'm more optimistic about Substrate than I've ever been, especially talking to developers from other ecosystems and seeing how many ideas or parts of Substrate are being adopted in other networks. Obviously the Cosmos SDK is more mature because it's much older and field-proven; it didn't go through stressful changes in 2019 or 2020, so a lot of projects moved there - it's a more stable development option. But then you see that Polygon uses Substrate to access data in its network, that Octopus Network uses Substrate to connect application-specific networks to Near, and that Compound chose Substrate as well - that says a lot.
I also see that Cosmos developers have a lot of respect for Substrate, and some of them are even choosing it for their newest projects; Ethereum developers working on an ETH 2.0 client are moving to Substrate as well and are very happy with it, in particular with off-chain workers and other Substrate features, especially runtime upgrades.
A few weeks ago I was talking to the person who organized and carried out the first hard fork for Cosmos. We agreed on how difficult that is to pull off on any network or system, while any Substrate network - Polkadot, Kusama, or whatever - does the equivalent routinely, even every two or three weeks; it is simply unthinkable by comparison with other teams and projects. I see this as one of the most underrated features, because there was a mantra that a blockchain should be as immutable as possible, but I don't think that's true. Even Bitcoin in 2018 or 2019 had a very critical inflation bug - you wouldn't expect old, battle-tested code and a simpler blockchain to have such critical flaws, and they had to be fixed as soon as possible. In the end, you still need people to maintain the software, maintain the network, and provide power and electricity.
Every network is still just a tool for people to coordinate and get things done, so you can't just disappear and say, "Yeah, it's the same software." No - it needs to be maintained, and that's why I think runtime upgrades are a very elegant way to evolve into something better over time. It's the coolest, killer feature Substrate has to offer.
In terms of the ecosystem, meeting many founders makes me calmer and more confident, especially the developers, because I realized that many of them are super talented people moving to Substrate or Polkadot from ecosystems such as Algorand and Cosmos, and they are genuinely wonderful people, not only in terms of knowledge and experience but also as individuals. Obviously there are also people who are not so great, who aren't honestly trying to build something lasting but rather to make quick money, but such people are everywhere. I'm much more confident in and satisfied with the ecosystem than before. On the other hand, it has real drawbacks, because it is one of the most difficult ecosystems to develop in, whereas Avalanche and everything that uses the EVM just works: you can fork anything that lives on Ethereum, it will work fine, and users can simply use Metamask.
Using all the infrastructure and tools already built lets any developer move very fast and deploy projects in weeks or months, while Substrate and Polkadot don't have a lot of tried and tested code that you can fork, build on top of, and then improve. Another thing worth mentioning is tooling: Substrate and Polkadot change very fast, which also caused a lot of delays on our end, and much of the tooling and infrastructure was outdated and didn't work very well. It was very hard in 2019, 2020 and 2021, but that is simply the growing pain of any new ecosystem. Solana in 2020 and last year was very similar: it was very difficult to develop and run anything on it, there was no code to fork and no tooling, everything had to be created from scratch. This is also a great opportunity for anyone who can see these problems and, instead of complaining, find solutions and ask the treasury for a grant to implement them.
The treasuries of Polkadot and Kusama have never been as full as they are now, and I think this will finally push a lot of people to work in the ecosystem. So the ecosystem as a whole has everything, good and bad, as every ecosystem does. I've talked to many founders from other ecosystems and everyone complained about something - as always. The whole crypto industry is still at a very early stage; even Ethereum is still early in its development, and there are so many things that are missing or not done well. I think we should see this as a great opportunity rather than a hindrance, and support the people who are building the ecosystems.
We are at the stage where we need to build out the ecosystem, and we are pushing hard in that direction - a stage I didn't expect back in 2019 or 2020, when there was much more doubt. Now that parachains and XCM have launched and all these things work, the FUD from competitors no longer matters. The ecosystem is working, and we can finally move on to the phase of Polkadot and Substrate where most of the code has already been written and implemented and developers mostly need to optimize and improve it.
I'm more optimistic than ever, but projects should act more like a family working together rather than trying to build everything on their own; otherwise there will be no ecosystem - and if that's what they want, they can go and build their own. It would be great to see better collaboration between teams, but so far it hasn't happened much because everyone was too busy developing their own project. Finally, more teams are reaching the stage where they are integrating: Interlay is doing a lot of integrations, Moonriver and Moonbeam are launching XCM, and Acala, Astar, and a few other top teams are starting to integrate with other projects. We are approaching the stage of an ecosystem, not just a collection of individual networks.
Jakub Panik - I've had a conflicted relationship with Substrate: first love, then hate, then love again. Right from the start, Substrate allowed us to roll out a blockchain very quickly, which is unprecedented. We had tried other approaches before, but this was the fastest solution we've tried. After that, we had to build something custom, and it was very difficult because there were a lot of updates and they slowed us down considerably. It was a growing pain, made worse by the pressure to launch Polkadot and Kusama: Parity and the other developers needed to solve problems and launch the network quickly, and I understood that. A few times when we had problems, I dived deep into the Substrate code and thought, "Oh shit, there are millions of lines of code in here" - but it doesn't look like bloated code; it's really good, concise code with research and documentation behind it. I think I saw a blog post where Gavin Wood wrote that Substrate has around 2 million lines of code, and it's a huge help to have a framework that powerful that you can use, build on, and change as you see fit. Deep dives take a long time because things keep changing, but right now we're talking to Robert Habermeier and the people who make Substrate, and they are essentially done with the core functionality. It won't change much anymore, so if you want to start building something, there is no better time. We know a lot about this because we started earlier, but you won't have the problems we had. I want to encourage all the founders who were put off by the large number of changes that slowed everyone down: this is no longer a problem. Most things are done and ready; if you're thinking about building something, you should do it right now. I don't know a better option for creating a custom blockchain at the moment - we are constantly evaluating other solutions, and everything has its pros and cons, but this is a really good ecosystem with professional developers and support.
Q - You talked about the Web3 Foundation grants you received. How are things going with the Web3 Foundation? Do they provide support - does Gavin Wood, for example, support HydraDX? Does this mean the whole ecosystem is waiting for the Omnipool and sees value in it? What do you think?
Jakub Gregus - I don't know how aware Gavin Wood is of us - probably he is. We don't talk much with the Web3 Foundation, more with Parity. The Web3 Foundation funds and organizes the research, but the research is ultimately implemented by Parity, so in a sense Parity is the more important player in the ecosystem, and it's better for projects to be close to them. The grant program is useful for newcomers who see an opportunity for tooling development, or if you want to build something specific that might be useful to others and want to offset your development costs a bit. But you can't rely on grants alone for anything important, like an ETH bridge or a BTC bridge. We had a validator monitoring grant in 2019, but it didn't get renewed for some odd reason; it really annoyed us at the time and then pushed us forward - we decided to build something that doesn't depend too much on grants. If you want to create something that is missing in the ecosystem, asking for a grant is one of the best ways to do it, and you can also get guidance from them on how to implement it. Are you talking about the SubAuction grant?
Question - I would like to know whether you cooperate with Parity. Do they expect a product from you and see its value for the ecosystem?
Jakub Gregus - These organizations need to be very careful about what they communicate externally; they have to remain neutral. They don't want to generate hype around specific projects, so as not to give the community or retail investors the impression that this is the chosen, correct project to put their money into. Some projects in the Polkadot ecosystem and in other ecosystems do use that kind of hype in their marketing - "Oh, we got a grant from the Web3 Foundation, they support us" - which is very questionable.
Question - I mean that this is not a regular DEX or AMM; it should be a breakthrough technology. Will Parity appreciate your ideas?
Jakub Gregus - I personally think they don't understand how revolutionary or innovative the project is because most of the people who work for the Web3 Foundation or Parity are not DeFi experts. They are experts in cryptography, Rust, back-end development, networking and other areas. So when they see an Omnipool or something like that, they can't tell how it's better than Sushiswap or another XYK model.
Jakub Panik - We work with these people at Parity and we love them not only as developers but as people in general. They are very good guys. I think they have some confidence in our project and what we are doing.
Jakub Gregus - When we ask them to review code for us, they review it and tell us they didn't find any major bugs, only minor cosmetic issues, and they spread a good word about us to their colleagues. When they see our solutions to infrastructure problems they are also trying to solve, they know how difficult that is even with far more resources - and we did it with less. They can only help with what they are familiar with; they helped us when we launched on the Rococo testnet and when we made the first XCM call.
Jakub Panik - This is not the best question for us, because Jakub and I hate bragging. It's better to ask Parity.
Jakub Gregus - Unfortunately, in my opinion, they don't think much about how it's better or how it differs from other projects - they simply don't have time to. There is a lot of pressure on everyone at Parity and the Web3 Foundation to get Polkadot and Kusama fully up and running and successful, because everyone expects it to be one of the most successful ecosystems, given that Gavin Wood co-founded Ethereum. They have a lot of work.
Q - Do you want to implement fractional, algorithmic pricing for LHDX?
Jakub Gregus - That's an interesting question. When we started working with Blockscience in December last year, one of their first questions was: "Do you want to make LHDX a stablecoin? It would be a significant change, but it would simplify a lot of things." We thought it was an intriguing idea, but we don't want to make a stablecoin because it has too many nuances. While thinking about it, we considered that maybe not a stablecoin, but an asset like Reflexer's RAI would be very interesting. Besides, the problem with USD stablecoins and other fiat-pegged assets is that inflation is becoming a very serious issue, so I'm not sure fiat stablecoins would be attractive - maybe something anchored to them. If you look at the FEI protocol, it's very similar to HydraDX, except they don't have their own DEX; Terra is also similar in some ways, but using a native token as collateral is very dangerous - in May we saw Terra almost enter a death spiral, which is why they are thinking about adding BTC as another asset backing their pool. These ideas are relevant and very interesting: they would allow the creation of a highly scalable stablecoin, and it could have very organic demand since it would also stabilize the pool. But right now it's quite futuristic. It's possible, but it's not our priority; if anyone wants to work on it, we'll be happy to support and maybe even fund it, but it's not on our current agenda. Let's launch HydraDX and Basilisk first - we already have plenty of ideas and features that need to be rolled out on the HydraDX and Basilisk networks.
Jakub Gregus - I just want to look back at the past year and thank you all for your exceptional patience. We didn't communicate well and often left you in the dark. It was not intentional - no one was hiding, and no one was spending your money. Thank you all, and we look forward to your support as we finally launch our mainnet!
All communication with the community should be more subtle and more careful about all changes. The latest blog failure just highlighted the importance of a deeper understanding of what it would look like for the community who are less dedicated or less involved in the project so they don't understand all the nuances. Every other major change will be announced gradually, at least we will randomly select a few people from the community, get feedback and move on to more open development. Almost all of our code is open source and we discuss everything publicly with the community, but in terms of the development process, we are not very ready for this. We tried to do this from the very beginning, but it was not very effective, because you need to have a lot more organization and much more carefully planned tasks, you need to analyze these tasks much better. There were not enough people on the team for this, which we eventually resolved, so there is no one-size-fits-all solution to this problem, but we are working on it. Starting with the publication of this year's strategy and an updated roadmap. This will be a great starting point and everything will be public and for the community first.
Now we have several people who are responsible for the development of the community. Basically, even aggregating all of our research and specs was really a challenge for new people, which we finally solved too, and the reason there were big gaps between us and you in the past is because we've been moving so far. too fast. We tried to be first on Rococo and tried to make the first cross-chain call and transfer. I do not want to make excuses, but for us there was too much of everything and we constantly lacked not only developers, but also positions in the community. We have hundreds of channels. I don't mean with the community, but with all other projects. Projects who want to integrate with us, who want to collaborate with us, who want to work with us or join research, join liquidity mining, etc. We had no intention of running away and hiding from everyone, but there was too much in one moment, finally things get better. We have several people who are responsible for specific things like back-end, front-end, community, security, research, project management, hiring, etc.
Jakub Panik - Let me talk about the current state of development. We had to take a step back because we needed to hire more people. Seeing how everything starts to crumble due to several people visiting the crowdlon interface, we realized that we need to create a strong infrastructure and cooperate with other projects in this direction. We've taken a step back, we've started working with Subsquid on data processing to give users the right data so they can use the apps and be fast and productive. It took a lot more effort than we thought, but we decided to do it because the long term thing that will help us in the future is by doing this we prevent all further problems. These kinds of problems would still happen at the very beginning, but to a much lesser extent, we knew that if 5 thousand people came to our application at one moment, then it would simply collapse and no one would use it. We decided to take a step back and rebuild everything from scratch, create the right infrastructure and this can be 90% usable for HydraDX. We don't just do this for Basilisk, it's a reusable fundamental work. And this will significantly reduce the launch of HydraDX.
I don't want to give dates or anything like that, we've made these mistakes before. It won't be as long as some people might think. Don't get us wrong, we've been involved in startups, but most startups don't have any communication with the community, you just talk to your investors and they always agree if you're doing well. For us, this is something new and in the future we will work better in this area.
Jakub Gregus - For HydraDX, Colin created the final Omnipool spec, which has been double-checked and validated by Blockscience. Developers start working on its implementation. We realized that the HydraDX implementation is much simpler than Basilisk, as Jakub already mentioned. Much of the imported middleware will already be re-engineered.
Colin - As far as Omnipool is concerned, we are at the stage where the basic structure is already in place. We are modeling various scenarios, such as how high fees should be and what makes HydraDX attractive in terms of liquidity. The basic structure is already defined and ready to be implemented.
Question - When you talk about adding assets to the Omnipool, I would like to clarify whether this will be done by the technical community or HydraDX management or, for example, through referendums based on decisions made by HDX holders?
Second question - You said that HDX holders will be charged for using the protocol. How will these fees be distributed to HDX token holders? Could you elaborate on what the distribution mechanics will look like, will it be a share of HDX or will the fees be distributed to holders in the form of dividends or something similar? Can you tell more about this?
Colin - We don't have a general idea of what these fees will look like when the network launches. I gave a list of examples of what could be included in the protocol governance process, but the bottom line was that even if we start without a fee distribution, HDX tokens will be used to govern the protocol from the perspective of the substrate, you will have to change a lot when upgrading during execution. HDX token holders will indeed be able to change or raise the fee to distribute protocol fees to HDX token holders. They have the ability to manage the tokenomics and protocol, but it won't happen on the very first day.
Question - What will be the procedure or mechanics of fees? For example, the management decides that it wants to increase the fees for using the protocol and distribute them among HDX token holders. And after performing a runtime upgrade, what will the mechanics look like? In the form of providing liquidity or staking?
Colin - We have yet to work it out. We are looking at something like a Curve distribution.
Jakub Panik - Fees will be collected in some treasury or there may be a secondary treasury and when we have this treasury we can open access to it and people will be able to receive part of these funds.
Jakub Gregus - Fee allocation is a very complex process because you don't want to turn your token into a security and provoke the regulators. Regulators can be very sensitive to this, pure reallocation of commissions is a clear digital analogue of securities, so the regulators will fine us or close our activities. In cryptocurrencies, you can get away with ransom, which is a kind of indirect redistribution or burn model. There is also the Curve Escrow Model, which we see becoming the gold standard in the industry. Many protocols that use these mechanics are reviewed with their lawyers and everything looks fine.
You need to do some work and this work materializes through the right to vote. This is the essence of the Curve model, which for us is the best candidate for implementation. The entire community is already familiar with this model. At least for once we will not shock and mislead people.
Jakub Panik - Regarding the technical committee. We would like to create a technical council because of the regulators. We need to create a committee that can help us add new tokens or maybe we will find the necessary mechanism that can reduce all risks when adding new tokens to Omnipool. This must be resolved first.
Jakub Gregus—Some networks provide Compound services. They measure the liquidity properties of assets and then, based on the results and models, set collateral ratios, interest rates, and so on. I would also like to create an economic council, not just a technical council. The economic council will support issues related to the economy of tokens, what should be the volume ratio, what should be the size of commissions, what part of the commission should be taken, how to use liquidity. Maybe the HydraDX management can redirect some of the liquidity into DOT tokens and participate in crowdlons, and use the rewards from these crowdlons as liquidity for HydraDX and process it. We will support all these and other economic issues with the technical committee, which will be responsible for the technical safety of these assets. I hope that this committee will be made up of the best people in the industry and I will gladly vote for a very good salary for them.
Question - When we move to the mainnet, the fear of fines (slash) will become real in the mainnet. It's not that scary on the testnet, but once the mainnet goes live, it'll be real. I hear a lot of comments on Discord about some validators not getting nominators. Will it lead to fines? Will it be a problem? Will it make people afraid to nominate? Will this reduce the amount of tokens staked? Because we need more validators to strengthen the network. If the validator goes offline, will he be fined?
Jakub Panik - Regarding the last part of the question, you can go offline and update your node and validator. It is not forbidden, on the contrary, it is encouraged. As for the first part of the question, we probably won't have staking as it is now because the shared security of Polkadot and Kusama means we don't need to maintain it. We thought about staking and using it on the mainnet, but we don't need staking at the very beginning. We only need node holders (Collators - Collators), we do not need a certain number of people doing this, as it is happening now. This is a very big problem because we need to figure out what to do with the people who are doing the validation on the testnet right now, but we have a solution that is probably too early to tell, but we want to decentralize the infrastructure. Not only the internal infrastructure, but also the node infrastructure. We may have incentive schemes for node holders supporting our infrastructure, but it's still too early to tell. We'll refine these ideas as the mainnet launches.
Jakub Gregus - In this case, the penalties will not be as hard or even they will be impossible. Worst case, the node will be unavailable, in which case the user will automatically switch to another node. Listening to you, I understand that there is still a lot of confusion in the Polkadot and Substrate community regarding the role of validators and node holders (Collators), because parachain validators are Polkadot validators. As a parachain, you don't need to have your own validators, you need to have collators. It can be 1 node, 10 nodes, 100 nodes, it doesn't matter, but these collator nodes cannot make invalid network state transitions, so their security properties are not as important as the security of the validators. Collators and full nodes are more service providers than POS validators.
Substrate offers a super powerful feature called off-chain worker nodes (off-chain workers) and it can be used to do a lot of useful work that can be done in parallel or done offline, but they still need to have some share of the tokens at stake to ensure proper validation. This data can be added to the network. These nodes will have staking enabled and these nodes can match off-chain transactions, so it doesn't have to be done entirely on the blockchain, it could be like zero knowledge proof to save a lot of space, etc.
Jakub Panik - It could be data providers. The middleware nodes that provide the data, they can have some share of the HDX tokens staked and prove that my data is correct or if it is incorrect they will be penalized.
Question - My question is about your travels and how you feel about the ecosystem as a whole. What is your view on Substrate after working with it for 1 year. Has it become more or less powerful, in your opinion? And how do you see interaction with other teams? I know you mentioned kUSD as a starting point in Omnipool. What are Acala's incentives to encourage HydraDX when they have their own swap functionality? How do you see yourself in the ecosystem and at what level is your spirit and the spirit of your team?
Jakub Gregus - That's a great question. For example, in early 2020 when we started developing on Substrate with HydraDX, we had a lot more concerns about the ecosystem. There were not many parachains, at least not seen or planned. Now this is no longer a concern. On the other hand, I personally do not see a large flow of new parachains, to be honest, this is my opinion. On the other hand, interesting Substrate networks are very specialized and have very unexpected use cases. I'm much more optimistic about Substrate than I've ever been in conversations with many other developers over the past year. Especially talking to other developers from other ecosystems, when I see how it is implemented, how many ideas or parts of Substrate are implemented in other networks. Obviously the Cosmos SDK is more interoperable because it's much older and field proven, it didn't have any stressful changes in 2019 or 2020 so a lot of projects have moved there, it's a more stable development solution. But when you see that Polygon uses Substrate to access data in its network or Octopus Network on Near protocol uses Substrate to connect specific application networks to Near or Compound chooses Substrate.
Also I see that other Cosmos developers have a lot of respect for Substrate and some of them even choose Substrate for their latest projects or Ethereum developers who are developing an ETH 2.0 client are actually moving to Substrate and are very happy and very happy in particular , off-chain worker nodes, and other Substrate features, especially runtime updates.
A few weeks ago I was talking to the guy who organized and did the first hard fork for Cosmos. We agreed how difficult it is to implement this on any network or system, while any Substrate network, be it Polkadot or Kusama or whatever, hard forks just like that, even every 2-3 weeks, it is simply unthinkable by comparison. with other teams and other projects. I see this as one of the underrated features because there were some mantras that the blockchain should be as immutable as possible, but I don't think that's true. Even Bitcoin in 2018 or 2019 had very critical inflation bugs and you wouldn't expect the old underlying code and simpler blockchain to have such critical flaws that should be fixed as soon as possible and they were super critical. After all, you still need people to maintain the software, to maintain the network, to provide power and electricity.
Every network is still just a tool for people to coordinate and get things done, so you can't just disappear and say, "Yeah, it's the same software." No, it needs to be maintained and that's why I think runtime updates are a very elegant way to evolve and evolve into something better over time - it's the coolest and killer feature Substrate has to offer.
In terms of the ecosystem, meeting many founders makes me more calm and confident, especially the developers make me more calm, as I realized that many of them are super talented people who are moving to Substrate or Polkadot from other ecosystems such as Algorand , Cosmos and they are really wonderful people, not only in terms of knowledge and experience, but also as individuals. Obviously, there are people who are not so cool, who do not have honest intentions to build something great, rather to make quick money, but such people are everywhere. I'm much more confident and satisfied ecosystem than before. On the other hand, the ecosystem has many drawbacks because it is one of the most difficult ecosystems to develop, Avalanche and everything that uses the EVM works great. Of course, you can fork anything that lives on Ethereum and it will work great and just use Metamask.
Using all the infrastructure and tools already built helps any developer to be super fast and deploy projects in weeks or months, while Substrate, Polkadot doesn't have a lot of tried and tested code that you can fork and build something on top of. then improve it. Another thing I could mention is the instrumentation issue, since Substrate and Polkadot are changing too fast, which also caused a lot of delays on our end. Many of the tools and infrastructure were outdated and didn't work very well. It's been very hard in 2019, 2020 and 2021, but it's just a pain in the ass for any new ecosystem. Solana last year or 2020 was very similar, it was very difficult to develop and run something on it. There was no code for the fork, there was no toolkit. It had to be created from scratch. This is also a great opportunity for anyone who can see these problems and instead of complaining, just find solutions and ask for a grant from the treasury to implement these solutions.
The Polkadot and Kusama treasuries have never been as full as they are now. I think this will finally push a lot of people to work in the ecosystem. So the ecosystem as a whole has a bit of everything, good things and bad things, and that applies to every ecosystem. I've talked to many founders from other ecosystems and everyone complained about something, as always. The whole crypto industry is still at a very early stage; even Ethereum is still early in its development. There are so many things that are missing or not done well. I think we should see this as a great opportunity, not a hindrance, and support the people who are building these ecosystems.
We are at a stage where we need to build out the ecosystem, and we are pushing hard on this. It's a stage I didn't expect back in 2019 or 2020, when there was much more doubt, but now that parachains and XCM have launched and all of these things work, the FUD from competitors no longer matters. The ecosystem is working, and we can finally move into the phase of Polkadot and Substrate where most of the code has already been written and deployed, and developers can focus on optimizing and improving it.
I'm more optimistic than ever, but projects should work together more like a family rather than trying to build everything on their own; otherwise there will be no ecosystem, and anyone who prefers that can go and build their own ecosystem elsewhere. It would be great to see better collaboration between the teams. So far it hasn't happened much because everyone was too busy developing their own projects, but more teams are finally getting to the stage where they are integrating: Interlay is doing a lot of integrations, Moonriver and Moonbeam are launching XCM, and Acala, Astar and a few other top teams are doing integrations with other projects. We are approaching the stage of an ecosystem, not just a collection of individual networks.
Jakub Panik - I've had a conflicted relationship with Substrate: first love, then hate, then love again. Right from the start, Substrate allowed us to roll out a blockchain very quickly, which is unprecedented. We had tried and worked with other solutions before, but this was the fastest we've used. After that, we had to build something special, and that was very difficult because there were a lot of updates and they slowed us down a lot. It was a growing pain, made worse by the pressure to launch Polkadot and Kusama. Parity and the other developers needed to solve problems and launch the network quickly; I understood that, and a few times when we had problems, I dived deep into the Substrate code and thought: "Oh shit, there are millions of lines of code in here." But it doesn't look like bloated code; it's really good, concise code with research and documentation behind it. I think I saw a blog post where Gavin Wood wrote that Substrate has 2 million lines of code, and having a framework like that is a huge help; it's really powerful. You can build on top of it and change things as you see fit. Deep dives take a long time because things keep changing, but right now we're talking to Robert Habermeier and the people who build Substrate, and basically the core functionality is done. It will not change much, so if you want to start creating something, there is no better time. We know a lot about this because we started earlier, but you won't have the problems we had. I want to reassure all the founders who were put off by the large number of changes that slowed everyone down: this is no longer a problem. Most things are done and ready, so if you're thinking about creating something, you should do it right now, and I think it's really powerful. I don't know of a better option for building a custom blockchain right now. We are constantly looking at other solutions, and everything has its pros and cons, but this is a really good ecosystem with professional developers and support.
Q - You talked about the Web3 Foundation grants you received. How are things going with the Web3 Foundation? Does someone like Gavin Wood support HydraDX? Does this mean the whole ecosystem is waiting for Omnipool and sees value in it? What do you think?
Jakub Gregus - I don't know how aware Gavin Wood is of us; probably he is. We don't talk much with the Web3 Foundation, more with Parity. The Web3 Foundation funds and organizes the research, but the research is ultimately implemented by Parity. Parity is in a sense the more important player in the ecosystem, and it is better for projects to be closer to them. The grant program is useful for newcomers who see an opportunity in tooling, or if you want to develop something specific that might be useful to others; it will lower your development costs a bit, but you can't rely on grants alone for something major like an ETH bridge or a BTC bridge. We had a validator monitoring grant in 2019, but it didn't get renewed for some odd reason. That really pissed us off at first, but then it pushed us forward: we decided to build something that doesn't depend too much on grants. If you want to create something that is missing in the ecosystem, asking for a grant is one of the best ways to do it, and you can also get some guidance from them on how to implement it. Are you talking about the Subaction Grant?
Question - I would like to know whether you cooperate with Parity. Do they expect a product from you and see its value for the ecosystem?
Jakub Gregus - Organizations like that need to be very careful about what they communicate externally; they need to remain neutral. They don't want to hype or single out specific projects, so as not to give the community or retail investors the impression that this is the chosen, correct project to put your money into. Some projects in the Polkadot ecosystem and other ecosystems do use that kind of hype in their marketing, "Oh, we got a grant from the Web3 Foundation, they support us", which is quite dubious.
Question - I mean, this is not a regular DEX or AMM, it has to be breakthrough technology. Do people at Parity appreciate your ideas?
Jakub Gregus - I personally think they don't understand how revolutionary or innovative the project is because most of the people who work for the Web3 Foundation or Parity are not DeFi experts. They are experts in cryptography, Rust, back-end development, networking and other areas. So when they see an Omnipool or something like that, they can't tell how it's better than Sushiswap or another XYK model.
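For readers unfamiliar with the term, here is a minimal sketch of the constant-product ("XYK") swap used by Sushiswap-style AMMs, the pairwise model the Omnipool is being contrasted with here. The function name and numbers are invented for illustration, and fees are ignored.

```rust
// Constant-product ("XYK") swap used by Sushiswap-style AMMs: a trade must
// keep the product of the two reserves constant (fees ignored here). The
// Omnipool design discussed in this AMA instead pools many assets together
// rather than maintaining separate x*y = k pairs.

/// Amount of Y received for selling `dx` of X into an x*y = k pool (no fee).
fn xyk_amount_out(reserve_x: u128, reserve_y: u128, dx: u128) -> u128 {
    let new_x = reserve_x + dx;
    // Preserve the invariant: new_x * new_y = reserve_x * reserve_y
    let new_y = reserve_x * reserve_y / new_x;
    reserve_y - new_y
}

fn main() {
    // Pool holding 1,000 X and 2,000 Y; sell 100 X into it.
    let out = xyk_amount_out(1_000, 2_000, 100);
    println!("selling 100 X returns {} Y", out); // 182 Y instead of 200: visible slippage
}
```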
Jakub Panik - We work with these people at Parity and we love them not only as developers but as people in general. They are very good guys. I think they have some confidence in our project and what we are doing.
Jakub Gregus - When we ask them to review code for us, they review it and then tell us that they didn't find any major bugs, only minor cosmetic issues. They tell their colleagues about it and spread a good word about us, especially when they see our solutions for infrastructure they are also trying to build themselves; they know how difficult it is even with far more resources, and we did it with less. They can only help with what they are familiar with, and they did help us when we launched on the Rococo testnet and when we executed our first XCM.
Jakub Panik - This is not the best question for us, because Jakub and I hate bragging. It is better to ask Parity.
Jakub Gregus - Unfortunately, in my opinion, they don't think much about how it's better or how it differs from other projects; they simply don't have the time. There is a lot of pressure on everyone at Parity and the Web3 Foundation to get Polkadot and Kusama fully up and running and successful, because everyone expects it to be one of the most successful ecosystems, given that Gavin Wood co-founded Ethereum. They have a lot of work.
Q - Do you want to implement fractional, algorithmic pricing for LHDX?
Jakub Gregus - That's an interesting question. When we started working with BlockScience in December last year, one of their first questions was: "Do we want to make a stablecoin out of LHDX? It would be a significant change, but it would simplify a lot of things." We thought it was an intriguing idea, but we don't want to make a stablecoin because it has too many nuances. While we were thinking about it, we considered that maybe it shouldn't be a stablecoin but an asset like Reflexer's RAI, which would be very interesting. In addition, there is a growing problem with USD stablecoins and other fiat currencies: inflation is becoming very serious. So I'm not sure that plain stablecoins would be attractive; maybe something like the Anchor-style assets built around them. If you look at the Fei Protocol, it's very similar to HydraDX, but they don't have their own DEX. Terra is also similar in some ways, but having a native token as collateral is very dangerous; in May we saw Terra almost enter a death spiral, and for this reason they are thinking about introducing BTC as another asset in their pool. These ideas are relevant and very interesting. They would allow for the creation of a stablecoin that can scale a great deal, and it could see very organic demand since it would also stabilize the pool, but right now it's very futuristic. It's possible, but it's not our priority. If anyone wants to work on it, we'll be happy to support and maybe even fund it, but it's not on our immediate agenda. Let's launch HydraDX and Basilisk first; we have a lot of ideas and features that still need to be rolled out on the HydraDX and Basilisk networks.
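Since RAI was mentioned as a possible direction, here is a rough, purely illustrative sketch of the idea behind such a floating-peg asset: a controller compares the market price with an internal redemption price and derives a redemption rate that drifts the target, creating incentives that pull the market back toward it. The struct, gain and numbers are invented for this example and are not a HydraDX or Reflexer specification; RAI itself uses a more elaborate PI controller.

```rust
// Illustrative sketch of a RAI-style floating peg (not a HydraDX design):
// instead of targeting $1, the system maintains a moving "redemption price"
// and applies a proportional controller whose output (the redemption rate)
// drifts that target over time.

struct Controller {
    redemption_price: f64, // internal target price
    kp: f64,               // proportional gain (made-up value below)
}

impl Controller {
    /// One control step: the further the market trades above the target,
    /// the more negative the redemption rate, so the target drifts down.
    fn step(&mut self, market_price: f64) -> f64 {
        let error = self.redemption_price - market_price;
        let redemption_rate = self.kp * error;
        self.redemption_price *= 1.0 + redemption_rate;
        redemption_rate
    }
}

fn main() {
    let mut c = Controller { redemption_price: 3.00, kp: 0.05 };
    // Market trades above the target, so the rate goes negative each step.
    for market in [3.20, 3.15, 3.10] {
        let rate = c.step(market);
        println!("rate {:+.4}, redemption price {:.4}", rate, c.redemption_price);
    }
}
```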
Jakub Gregus - I just want to look back at the past year and thank you all for your exceptional patience. We didn't communicate well and often left you in the dark. That was not intentional; no one was hiding, and no one was misspending your money. Thank you all, and we look forward to your support as we finally launch our mainnet!
January 13, 2022