
Golang contribution “journey”

Golang probably has the worst contribution model of any project I have ever worked with. Even the Linux kernel is kawaii compared to this.

They have finally merged my Wine time fix into the runtime/windows subsystem, which took 3 months of negotiations, required 2 almost complete rewrites and, more importantly, going through Google’s code contribution policy.

Wine does not support memory-mapped updates of the timer structures. It is similar to the Linux vsyscall (or Linux copied it from Windows): the kernel periodically updates timer data at a fixed address, which can be mapped into userspace and read without going through the syscall mechanism, which is noticeably slower. Wine does not have it (and frankly cannot, since it runs in userspace), but the whole golang Windows port is based on this Windows feature. I implemented a fallback mechanism which uses plain Windows syscalls to get the time and QPC counters to implement monotonic time.
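To illustrate the idea only: the fallback boils down to asking the kernel directly instead of reading the shared timer page. Below is a minimal userspace sketch of that idea (Windows-only, built on the real kernel32 QueryPerformanceCounter/QueryPerformanceFrequency calls); it is not the actual runtime patch, which lives inside the runtime itself.

// Windows-only sketch: a monotonic clock built from plain
// QueryPerformanceCounter syscalls. Unlike the memory-mapped timer
// page, these calls work under Wine, at the cost of a real syscall
// per reading. This is an illustration, not the runtime change.
package main

import (
	"fmt"
	"syscall"
	"unsafe"
)

var (
	kernel32 = syscall.NewLazyDLL("kernel32.dll")
	procQPC  = kernel32.NewProc("QueryPerformanceCounter")
	procQPF  = kernel32.NewProc("QueryPerformanceFrequency")
)

// monotonicNanos converts the performance counter into nanoseconds.
func monotonicNanos() int64 {
	var counter, freq int64
	procQPC.Call(uintptr(unsafe.Pointer(&counter)))
	procQPF.Call(uintptr(unsafe.Pointer(&freq)))
	// Split the conversion to avoid overflowing int64.
	return (counter/freq)*1e9 + (counter%freq)*1e9/freq
}

func main() {
	fmt.Println("monotonic ns:", monotonicNanos())
}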

But this post is not about the technical details of the time subsystem in golang, it is about the stupidity of Google’s contribution policy.

Google’s contribution policy requires you to have a Google account, which it may refuse to create for whatever obscure reason: it just fails and nothing moves forward. No OAuth, the industry standard, only Google’s own ugly account. I wonder whether it will force me to use Google+ to log in.

If you ask another person to contribute to golang on your behalf, they do not accept your patch, since they do not know whether you signed the contribution agreement or not (even though the person who sends and authors the patch did sign this shit). Google forbids discussing and reviewing patches on GitHub, where you can only create an issue with a complaint; they force you to use Gerrit and follow golang’s own horrible contribution steps based on a single-commit approach in git.

Technical management at Google (and frankly at many other places) breaks the whole idea of the fun behind working on an open source project.

Please, do not be like Google.

golang: shine and depression

I will just leave this here:

$ export GODEBUG="gcdead=1"
$ go
panic: runtime error: invalid memory address or nil pointer dereference
[signal 0xb code=0x1 addr=0x0 pc=0x6d4618]

goroutine 1 [running]:
regexp/syntax.(*parser).collapse(0xc20805e000, 0xc208030088, 0x6868686868686868, 0x6868686868686868, 0x13, 0x6868686868686868)
	/usr/local/go/src/regexp/syntax/parse.go:376 +0x2b8
regexp/syntax.(*parser).factor(0xc20805e000, 0xc208030080, 0x6868686868686868, 0x6868686868686868, 0x0, 0x6868686868686868, 0x6868686868686868, 0x6868686868686868)
...

This is the latest stable go, 1.4.1.

gcdead=1 tells the garbage collector to get rid of (‘clobber’) stack slots that it thinks are dead. Apparently either the golang GC thinks that a stack which is still being used is dead, or there is a stack overflow (something like pinning pointers).

Elliptics, golang, GC and performance

Elliptics distributed storage has a native C/C++ client API as well as Python (comes with the elliptics sources) and Golang bindings.

There is also Rift, an elliptics HTTP proxy.

I like golang for its static type system, garbage collection and built-in lightweight threading model. Let’s test its HTTP proxying capabilities against an Elliptics node. I have already tested the Elliptics cache purely against the native C++ client: it showed an impressive 2 million requests per second from 10 nodes, or about 200-220 krps per node using the native API (very small requests, up to 100 bytes each). What would the HTTP proxying numbers be?

First, I ran a single-client, single-Rift-proxy, single-elliptics-node test. After some tuning I got 23 krps for random writes of 1k-5k bytes per request (a very realistic load). I tested two cases: the elliptics node and the Rift server on the same machine, and on different physical servers. Maximum latencies at the 98th percentile were about 25 ms at the end of the test (about 23 krps) and 0-3 ms at 18 krps, not counting rare spikes in the graph below.

[Figure: Rift HTTP proxy writing data into the elliptics cache, 1k-5k bytes per request]

Second, I tested a simple golang HTTP proxy with the same setup: a single elliptics node, a single proxy node and the Yandex Tank benchmark tool.
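The proxy itself is just a handful of lines. The actual test forwarded writes into elliptics through its Go bindings, but the overall shape is roughly the following sketch, with a hypothetical plain-HTTP backend address standing in for the elliptics node:

// A minimal Go HTTP proxy sketch. The backend URL below is a
// placeholder; the real test wrote into an elliptics node via the
// elliptics Go bindings rather than a generic reverse proxy.
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
)

func main() {
	backend, err := url.Parse("http://127.0.0.1:8080")
	if err != nil {
		log.Fatal(err)
	}

	// Every incoming request is served in its own goroutine, which
	// is what makes this kind of async proxy trivial to write in Go.
	proxy := httputil.NewSingleHostReverseProxy(backend)
	log.Fatal(http.ListenAndServe(":9000", proxy))
}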

I ran tests using the following setups: golang 1.2 with GOGC=100 and GOGC=off, and golang 1.3 with the same garbage collection settings. The results are impressive: without garbage collection (GOGC=off) the golang 1.3 test ran with the same RPS and latencies as the native C++ client, although the proxy ate 90+ GB of RAM. Golang 1.2 showed 20% worse numbers.
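For reference, GOGC=100 is the default setting (a collection is triggered when the heap doubles), and GOGC=off disables collection entirely. The same switch is available programmatically via runtime/debug, as in this small sketch:

// Equivalent of GOGC=off from inside the program itself.
package main

import (
	"fmt"
	"runtime/debug"
)

func main() {
	// SetGCPercent(-1) disables the collector, like GOGC=off;
	// it returns the previous setting (100 by default).
	old := debug.SetGCPercent(-1)
	fmt.Println("previous GC percent:", old)
}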

[Figure: Golang HTTP proxy (garbage collection turned off) writing data into the elliptics cache, 1k-5k bytes per request]

Turning garbage collection on with the GOGC=100 setting led to much worse results than the native C++ client, but they are still quite impressive. I got the same RPS numbers in this test, about 23 krps, but latencies at 20 krps were close to 80-100 ms, and about 20-40 ms in the middle of the test. Golang 1.2 showed 30-50% worse results here.

[Figure: Golang HTTP proxy (GOGC=100 garbage collection setting) writing data into the elliptics cache, 1k-5k bytes per request]

The numbers are not that bad for a single-node setup. Writing asynchronous parallel code in Golang is incomparably simpler than in C++ with its forest of callbacks, so I will stick with Golang for async network code for now. I will wait for Rust to stabilize though.