Some time ago I deployed a token contract called BitEther Coin (BEC). The idea is simple: an ETC miner participating in this experiment gets an additional BitEther token reward on top of the Ether reward received for a block. The miner gets both Ether (5 ETC per block) and BitEther (2 BEC per block).
This is not a modification of the protocol, nor is it a hard fork or soft fork. It is just a standard feature provided by the existing technology and capabilities of the ETC system.
By doing this I am trying to show that Ethereum Classic is not the same as other chains: it has more powerful technology that allows you to build your own blockchain layer on top of it. The security of the network can be supported by any participant, or by any business building on top of the chain.
BitEther Supply Model
The BitEther Token follows the Monetary Supply of Bitcoin. My initial goal was to make it issue 50 tokens every 10 minutes, with a halving every 4 years.
In practice, the BitEther «big block» time is less than 10 minutes, because an additional goal was to reach 99% of total token production at about the same time as Bitcoin will. As a result, BitEther issues 50 BEC every 6–7 minutes and halves every 3 years.
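The effect of the shorter halving period can be sketched with some back-of-the-envelope math. With a geometric emission schedule (the reward halves every era), the fraction of total supply issued after k halvings is 1 − 2^−k, regardless of block time. This is a sketch, not code from the BEC contract; the 3-year halving period is the only parameter taken from the post.

```java
// Sketch: how fast a Bitcoin-style geometric emission approaches its cap.
// After k halvings, the issued fraction is 1 - 2^-k; the 3-year halving
// period comes from the post, everything else follows from the math.
public class BitEtherEmission {

    // Fraction of total supply issued after k halvings.
    static double issuedFraction(int k) {
        return 1.0 - Math.pow(2, -k);
    }

    public static void main(String[] args) {
        double halvingYears = 3.0; // BitEther halving period per the post
        for (int k = 1; k <= 8; k++) {
            System.out.printf("after %d halvings (~%.0f years): %.2f%%%n",
                    k, k * halvingYears, issuedFraction(k) * 100);
        }
    }
}
```

The 99% mark is crossed at the 7th halving, i.e. after about 21 years with a 3-year period versus about 28 years with Bitcoin's 4-year period, which is how a chain that launched later can still hit 99% issuance at roughly the same calendar time.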
By microservice architecture here I mean an app split into independent parts, connected via MQ, a database, internal RPC, or independent APIs. It's usually deployed as containerized micro-apps onto a horizontally scaled cluster.
Main problems with ORM
Different technology stack
Most likely you have a mixed tech stack, using a different technology/framework/language for different services. Some parts are JVM apps, but others are easier to write in Golang or Python. In some cases you use an external 3rd-party app, customized for your needs.
Lifecycle and backward compatibility
You probably don’t want to restart the whole cluster, every single service, just to upgrade some part of the db layer. So you have to deal with different versions of data models that must live together in the same shared environment (remember «caching»).
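The coexistence of model versions can be illustrated with a tiny sketch. Here two versions of a hypothetical "user" model read the same shared record: the old service ignores fields it doesn't know about, and the new one provides a default when an old row lacks a field. The class and field names are made up for illustration, not taken from any real service.

```java
// Sketch: two data-model versions reading the same shared rows.
// v1 ignores unknown fields; v2 defaults missing ones, so both
// service versions can run against the same db/cache at once.
import java.util.Map;

public class MixedVersions {

    // v1 model: knows only id and name.
    static String describeV1(Map<String, Object> row) {
        return row.get("id") + ":" + row.get("name");
    }

    // v2 model: also reads the newer "email" field, with a default
    // so rows written by v1 services are still readable.
    static String describeV2(Map<String, Object> row) {
        Object email = row.getOrDefault("email", "unknown");
        return row.get("id") + ":" + row.get("name") + ":" + email;
    }

    public static void main(String[] args) {
        Map<String, Object> newRow = Map.of("id", 1, "name", "alice", "email", "a@example.com");
        Map<String, Object> oldRow = Map.of("id", 2, "name", "bob");
        System.out.println(describeV1(newRow)); // old service reads a new row fine
        System.out.println(describeV2(oldRow)); // new service tolerates an old row
    }
}
```

The key design choice is tolerance in both directions: readers never fail on extra fields, and writers never assume every row has the newest shape.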
Subset of data
Another important point is that each microservice operates on its own subset of data, and performs different types of operations against that data. It’s even common to split an app into microservices that do only the following:
- put data into the db from a 3rd-party storage
- process/transform the stored data
- display data to the end user
Please note that this post was written long before the Hard Fork happened; it wasn’t clear how it would be implemented, and there was no final decision at the time of writing.
I never invested in The DAO, and I was even against the idea.
I was worried because people just gave money without understanding what they were investing in. Too many people decided to risk their money just because they thought it was cool, without any due diligence.
I’ve been using Google Cloud since the early days; I think it was 2009 when I deployed my first website to App Engine. Since then I’ve done consulting, built many projects, run experiments, and so on. For the past few months I’ve been working on TipTop.io, a SaaS solution for analytics and monitoring of applications hosted on Google Cloud Platform.
The project is at an early stage and currently supports only Google App Engine. But support for Cloud Containers/GKE is coming soon (and probably plain Kubernetes in the near future).
A quick note about office desks, seats, people and team spirit.
I’ve spent almost 15 years in software development and have seen different companies, different people, different teams and different offices. Maybe a hundred of them, places where I used to work or my friends did. Some offices were small, some large. Cubicles, rooms and open spaces. Noisy and quiet. Cool startups and boring enterprises. Sometimes different offices belonging to the same organization.
At some point I noticed a small difference, a difference in team members that I think correlates with how their work desks are arranged. I remember that when the same people moved from one office to another, they started to conduct themselves differently.
Let’s Encrypt is a new Certificate Authority: It’s free, automated, and open.
“Let’s Encrypt” is a really great initiative (and tool) that, I hope, will improve the security of the modern web. It has a very nice client that does all the work automatically (unfortunately it’s not yet supported by Google App Engine). It’s supposed to run on the target server, where it can validate the domain and configure your Apache/Nginx/etc. But in the case of App Engine we don’t have such a server, so we have to generate and upload the SSL certificate manually. I’ll show you how.
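The manual flow starts with generating a key and a certificate signing request (CSR) on your own machine. This is a generic sketch using standard openssl commands; `example.com` is a placeholder for your domain, and the exact Let’s Encrypt client invocation may differ by version.

```shell
# Generate a private key and a CSR locally (example.com is a placeholder).
openssl req -new -newkey rsa:2048 -nodes \
  -keyout example.com.key -out example.com.csr \
  -subj "/CN=example.com"

# The Let's Encrypt client is then run in manual mode against this CSR
# (e.g. `certbot certonly --manual --csr example.com.csr`); it prints a
# challenge that you must serve from your App Engine app to prove domain
# ownership, then returns a signed certificate that you upload together
# with the key in the App Engine console.
```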
Google Cloud has a nice Log Service with some cool features (like Traces, which I wrote about before), but it lacks real analytics on top of these logs. Like Kibana.
Fortunately, Google Cloud can export logs to Cloud Storage. What’s cool is that these logs are in JSON format, so we can easily import them into ELK without any complex Logstash configuration (honestly, I can’t say the JSON schema fits ELK well, but it’s still easy to import).
I’ve prepared a basic Docker container with Elasticsearch, Logstash and Kibana configured for App Engine logs. Run the ELK container with:
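The original command was not preserved here, so this is a generic sketch: `<your-elk-image>` is a placeholder for the actual image name, and the ports are the usual ELK defaults (5601 for Kibana, 9200 for Elasticsearch, 5044 for the Logstash input).

```shell
# Placeholder image name; ports are standard ELK defaults.
docker run -d --name elk \
  -p 5601:5601 -p 9200:9200 -p 5044:5044 \
  <your-elk-image>
```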
(you can get sources of this Docker image there).
Fixing webapp speed is a really hard job, mostly because it’s hard to find the bottleneck. I want to show some tools that Google App Engine gives you for this job. Actually, I’m going to talk about a combination of two tools that work perfectly together.
The first is Traces (under the Monitoring tab in Cloud Console). It’s a fairly new tool, and I didn’t pay much attention to it earlier, just played with it a little. I thought it was just another view of your logs, from the App Engine API’s point of view:
It shows you details about requests: which API calls were made, how much time the server spent on them, how much they cost, etc. Pretty useful information, by the way.
For many years I’ve been working in distributed teams, for different companies and different projects. And there is one important thing that distinguishes one team from another: how team meetings are organized.
I mean “Morning Standup”, “Weekly Standup”, etc.
For a traditional (non-distributed) company it’s easy: just get together in the morning, according to a schedule or simply when everybody is ready to talk.
A fixed schedule is much more important for a distributed worker. For a distributed team there is no morning, just Skype and different timezones. Also, the person on the other side usually needs some preparation before a call: turn off the music, put on headphones, turn on the microphone and camera, etc. Get dressed :)
I don’t need to say that RESTful web services are very popular these days. So you need them, for sure, but what should you choose? I’ve tried different Java frameworks for REST, most often Jersey and Spring MVC, and I think that for most cases Spring is the best option for building RESTful applications in Java.
If you already have a Spring app, then you don’t need any complex configuration to start implementing a RESTful API with Spring. Just configure a view resolver for JSON, and use standard annotations like:
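A minimal controller might look like the sketch below. The class, path and model names are hypothetical, but the annotations (`@RestController`, `@RequestMapping`, `@PathVariable`) are standard Spring MVC; with Jackson on the classpath (or a JSON view resolver configured), the returned object is serialized to JSON automatically.

```java
// Hedged sketch of a Spring MVC REST controller; names are hypothetical.
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class UserController {

    @RequestMapping(value = "/users/{id}", method = RequestMethod.GET)
    public User getUser(@PathVariable("id") long id) {
        // In a real app this would come from a service or repository.
        return new User(id, "example");
    }

    // Simple POJO; its getters define the JSON fields in the response.
    public static class User {
        private final long id;
        private final String name;

        public User(long id, String name) {
            this.id = id;
            this.name = name;
        }

        public long getId() { return id; }
        public String getName() { return name; }
    }
}
```

A `GET /users/42` request would then return something like `{"id":42,"name":"example"}`, with no XML config beyond the JSON view resolver mentioned above.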