Sunday, May 08, 2016

Google Code Jam 2016 in Lisp: Round 1B/1C

After failing to qualify in round 1A in this year's Google Code Jam, I had two more chances to advance to round 2.

Round 1B

In round 1B, I only managed to solve the first problem, Getting The Digits.  It took me quite some time, but eventually I found the recommended approach: count letters that occur in only a single digit word (e.g. Z occurs in no digit other than ZERO), then eliminate those digits.  Initially all the even digits have such unique letters (0:Z, 2:W, 4:U, 6:X, 8:G); once those have been eliminated, the same holds for further digits: (1:O, 3:T, 5:F, 7:S) and finally (9:I).  The program is ugly in that this sequence of letter/digit combinations is hard-coded; a better program would derive the letters by itself, so that it could be ported more easily to other languages etc.  My Lisp code is here.
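
For the record, here's a minimal sketch of that elimination idea (a reconstruction for this post, not the submitted code):

    ;; Sketch of the elimination approach, with the letter/digit order
    ;; hard-coded as described above.
    (defparameter *elimination-order*
      '((#\Z 0 "ZERO") (#\W 2 "TWO") (#\U 4 "FOUR") (#\X 6 "SIX") (#\G 8 "EIGHT")
        (#\O 1 "ONE") (#\T 3 "THREE") (#\F 5 "FIVE") (#\S 7 "SEVEN")
        (#\I 9 "NINE")))

    (defun extract-digits (scrambled)
      "Return the digits hidden in the string SCRAMBLED, in ascending order."
      (let ((counts (make-hash-table)))
        (loop for ch across scrambled do (incf (gethash ch counts 0)))
        (let ((digits '()))
          (loop for (letter digit word) in *elimination-order*
                for n = (gethash letter counts 0)
                when (plusp n)
                do (dotimes (k n) (push digit digits))
                   ;; remove the letters of this digit's word N times
                   (loop for ch across word
                         do (decf (gethash ch counts 0) n)))
          (format nil "~{~d~}" (sort digits #'<)))))

For example, (extract-digits "OZONETOWER") returns "012".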

Anyway, both of the other problems resisted my analysis.  I should have been able to solve Close Match, but I couldn't decide whether it should be solved from left to right or from right to left, and ended up spinning in circles.  The final problem, Technobabble, is beyond my current capabilities: I neither succeeded in mapping the problem description to the correct graph problem (minimum edge cover on a bipartite graph), nor would I have been able to code an efficient solution to that problem.  As an undergraduate I was good at these kinds of things, but I have lacked practice over the past 20 years or so.

With only one problem solved, I ranked #4259 and was very far from qualifying.  The other Lispers didn't fare much better in this round, except for orivej, who advanced at rank #847.  He or she used Lisp only for the first problem, and Python for the third.

Round 1C

Again I was able to solve the first (and easiest) problem, Senate Evacuation.  There are many viable approaches, and it took me a bit too long to find one.  Here's my slightly embellished A.lisp (raw submission).
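
One viable approach, sketched below as a reconstruction (not necessarily what A.lisp does): as long as more than three senators remain, evacuate one senator from the largest party together with one from the largest remaining party; when exactly three senators are left (necessarily one each from three parties, since no party may ever hold an absolute majority), send one of them out alone first and the final two as a pair.

    ;; Sketch: COUNTS is a vector of party sizes (destructively consumed);
    ;; returns the evacuation steps as strings of party letters.
    (defun evacuate (counts)
      (flet ((largest ()
               (let ((best 0))
                 (dotimes (i (length counts) best)
                   (when (> (aref counts i) (aref counts best))
                     (setf best i)))))
             (party-letter (i) (code-char (+ (char-code #\A) i))))
        (let ((steps '()))
          (loop while (plusp (reduce #'+ counts))
                do (let ((i (largest)))
                     (decf (aref counts i))
                     (if (= (reduce #'+ counts) 2)
                         ;; exactly 3 were left: send this senator out alone
                         (push (string (party-letter i)) steps)
                         ;; otherwise pair him/her with one from the largest
                         ;; remaining party
                         (let ((j (largest)))
                           (decf (aref counts j))
                           (push (coerce (list (party-letter i)
                                               (party-letter j))
                                         'string)
                                 steps)))))
          (nreverse steps))))

For instance, (evacuate (vector 2 2 1)) yields ("AB" "A" "BC"), and after each of those steps no party holds a majority of the remaining senators.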

After 35 minutes, I attacked the second problem, Slides! After lots of scribbling of graphs and diagonal matrices, I found a method to construct slides for an arbitrary M below the maximum that maps nicely to binary numbers.  The resulting code (B.lisp, cleaned up from the actual submission) is short and sweet and even turns out to work.  But again it took me much too long to get there.
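
For readers who want the idea without reading B.lisp, here is a sketch of the same binary construction in more explicit form (a reconstruction, not necessarily identical to the submitted code): connect every intermediate building to every later building, so that building i has exactly 2^(B-1-i) paths to building B; building 1 then collects exactly M paths by choosing its outgoing edges according to the binary digits of M, with a direct edge to building B reserved for the maximum M = 2^(B-2).

    ;; Sketch of the binary construction (not literally B.lisp).
    (defun slides-matrix (b m)
      "Return a BxB adjacency matrix with exactly M paths from 1 to B, or NIL if M > 2^(B-2)."
      (let ((max (expt 2 (- b 2))))
        (when (<= m max)
          (let ((adj (make-array (list b b) :initial-element 0)))
            ;; every intermediate building points to every later building
            (loop for i from 1 below (1- b)
                  do (loop for j from (1+ i) below b
                           do (setf (aref adj i j) 1)))
            (if (= m max)
                ;; maximum: building 1 points to everything, incl. B directly
                (loop for j from 1 below b do (setf (aref adj 0 j) 1))
                ;; otherwise bit (B-2-i) of M decides about the edge from
                ;; building 1 to the 0-based row I, worth 2^(B-2-i) paths
                (loop for i from 1 below (1- b)
                      when (logbitp (- b 2 i) m)
                      do (setf (aref adj 0 i) 1)))
            adj))))

(slides-matrix 4 3), for example, makes building 1 point at buildings 2 and 3, which contribute 2 and 1 paths respectively.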

With only 16 minutes left, I looked at the final problem, Fashion Police.  I knew I didn't have the slightest chance to solve the problem properly, so I started coding a brute-force algorithm that might solve the small dataset.  But that was not as small as it looked, and the problem not as simple as I thought, so I failed.  Well, anything else would have been a miracle.

My final rank in this round was #1326.  I didn't qualify, but I was quite happy with this honorable (for me) result.  To qualify, I would have had to finish my two working programs in 1h39'30" instead of 2h13'46".  Maybe with more practice I'd be able to save those 34 minutes somewhere, but it looks like a long shot!

Looking at the other Lispers in the contest: this round saw an awesome result from lpetru, who handed in a perfect score (100 points, all problem sets solved), landing him or her at rank #98.  And so far he or she has submitted all solutions in Lisp.  I know whom to root for in round 2!

Final remarks

Again it was lots of fun to participate in Google Code Jam.  It was exhausting and sometimes frustrating (especially in round 1B).  But it's good to have those old brain cells work outside their comfort zone.  So I'll probably try again and hope that I'll eventually manage to reach round 2.  See you next year!

Sunday, April 17, 2016

Google Code Jam 2016 in Lisp: Round 1A

Having survived (like 96% of all Lispers and 81.5% of the wider population) the Qualification Round, I'm trying to pass the first "real" round. As expected, things are getting much more serious now! Round 1 is held in three sub-rounds, and you can participate in each of them until you advance to round 2.  In each sub-round, the 1'000 highest-ranking contestants advance to round 2.

Where the qualification round had consisted of four problems (subjectively, two easy and two somewhat tricky) with 27 hours to solve them, round 1 consists of three problems (an easier one and two quite tricky ones) that have to be solved within 2h30.  From previous participations I knew that I would likely run into the time limit.

Getting off to a good start

Although the first sub-round, Round 1A, started at 3AM in my time zone, I decided to give it a try.  I went to bed somewhat early and asked one of my sons, who was going out, to wake me up a bit before three, which he dutifully did.  (Thank you!)

The first puzzle (The Last Word) went quite well.  Somehow I convinced myself quickly that it is sufficient to put the next letter in front if it is greater than or equal to the first letter of the current substring, and at the end if it is smaller.  My straightforward Lisp implementation can be found here.
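
The rule itself fits in a few lines of Lisp; here is a sketch of just the core, without the I/O:

    ;; Greedy rule: prepend CH when it is >= the current first letter,
    ;; otherwise append it.
    (defun last-word (s)
      (let ((word ""))
        (loop for ch across s
              do (setf word
                       (if (or (zerop (length word))
                               (char>= ch (char word 0)))
                           (concatenate 'string (string ch) word)
                           (concatenate 'string word (string ch)))))
        word))

(last-word "JAM") evaluates to "MJA".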

The second task (Rank and File) was cute.  After a bit of scribbling and thinking, I suddenly realized that it's sufficient to look for numbers that occur an odd number of times, sort those numbers and be done with it.  I coded this up fairly quickly as well (Lisp source here).
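
In sketch form (each grid entry is given once in its row and once in its column, so with one row missing, exactly the numbers of that row occur an odd number of times):

    (defun missing-row (numbers)
      "NUMBERS is the flat list of all numbers in the 2N-1 given lists."
      (let ((counts (make-hash-table)))
        (dolist (n numbers)
          (incf (gethash n counts 0)))
        (sort (loop for n being the hash-keys of counts
                    when (oddp (gethash n counts))
                    collect n)
              #'<)))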

So I was done with the first two problems after 44'22" with no penalties.  I was also reasonably certain that I had gotten both large input sets right.  (In the Qualification Round I had wrongly believed so as well, but in the critical case of problem D, the difference between the small and the large input was qualitative rather than quantitative.)

Problem C: close, but no cigar

With 1h45 left, I attacked the last problem, BFFs.  This is a great problem—very simple in its formulation, yet requiring good (but not impossible) analysis, and also a bit of programming skills.  It would be a nice recruitment interview problem.

This is pretty clearly a graph problem.  The input is a rather special kind of directed graph in which every node has out-degree 1 (the BFF function).  This must somehow be mapped to another kind of graph that represents a circle with some constraints, but that mapping is not straightforward.  Studying the sample inputs on the problem page, I quickly found that there are two different types of circle: one is a complete cyclic sub-graph of the input, which is easy to find given the simple type of input graph.  The other possibility is a "core" circular graph of two nodes (a "couple" of mutual BFFs), with "extensions" of other nodes (kids) that (BFF-) point towards the nodes of the core.  I wrote the code for both cases and tested it successfully against the tiny example on the problem page.  My code looked for the maximum cycle and the maximum graph of the second type, and returned the size of the bigger one.

To my great disappointment, the system told me that my solution for the small input set was incorrect.  I couldn't figure out what was wrong, and time ran out on me.  It was 5h30, and I had finished at rank #2137, with correct solutions for the first two problems and none for the third.

I went to bed slightly frustrated, and of course couldn't sleep right away but continued to think about the problem... what if the entire class consists of happy couples, i.e. mutual BFFs? Then I could put everybody in the circle.  Hmmm... that's a case I had somehow missed! In fact the second type of graph (couple with "inbound" extensions) can occur arbitrarily often in the circle.  Having understood my error, I could finally sleep.

The next morning I had to write the code, even though the contest was over.  Although I had coded (correctly, I think) a good part of the necessary logic the night before, it took me several hours to get the algorithm right for the actual solution.  I was too ambitious and wanted to write a really efficient solution ("early optimization..."), although a stupid O(N²) approach would probably have worked even for the maximum problem size of a class of 1'000 kids.  Probably I was also tired.  Also, I realized that I didn't know how to do proper debugging in SBCL, and I had written the code in a way that made debugging somewhat hard.  Finally I got the code right, see C.lisp here.  Although the code is not particularly compact, I'm happy with it because at least I can read and understand it (for how long is another question), and it is O(N) with a pretty low constant factor, for what that is worth: it runs in 20 milliseconds on the large practice set, including I/O.
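
For readers who don't want to wade through C.lisp, here is a deliberately simple-minded sketch of the corrected analysis (nowhere near the O(N) of the real thing): the answer is either the longest cycle in the BFF graph, or the sum, over all mutual-BFF couples, of 2 plus the longest chains of kids pointing into each half of the couple.

    ;; Brute-force sketch of the BFFs analysis (not the submitted C.lisp).
    ;; F is a 0-based vector: (aref f i) is the BFF of kid i.

    (defun longest-chain-into (target partner f)
      "Longest chain of kids whose BFF pointers lead to TARGET, not via PARTNER."
      (let ((n (length f))
            (best 0))
        (dotimes (start n best)
          (unless (or (= start target) (= start partner))
            (let ((k start) (len 0))
              (loop repeat n
                    until (or (= k target) (= k partner))
                    do (setf k (aref f k)) (incf len))
              (when (= k target)
                (setf best (max best len))))))))

    (defun bff-circle (f)
      (let ((n (length f))
            (longest-cycle 0)
            (couples-total 0))
        ;; longest cycle: walk from every kid and note when we revisit a kid
        ;; already seen on the current walk
        (dotimes (start n)
          (let ((step (make-array n :initial-element nil))
                (k start)
                (i 0))
            (loop until (aref step k)
                  do (setf (aref step k) i)
                     (incf i)
                     (setf k (aref f k)))
            (setf longest-cycle (max longest-cycle (- i (aref step k))))))
        ;; couples: each mutual pair contributes itself plus the longest chains
        ;; pointing into its two members from outside the couple
        (dotimes (i n)
          (let ((j (aref f i)))
            (when (and (< i j) (= (aref f j) i))
              (incf couples-total
                    (+ 2
                       (longest-chain-into i j f)
                       (longest-chain-into j i f))))))
        (max longest-cycle couples-total)))

(bff-circle (vector 1 0 0 2)) gives 4: kids 3 and 2 point into the couple 0/1, and all four fit in the circle.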

What about the other Lispers?

The per-language dashboard for Lisp shows that none of us advanced in sub-round 1A.  A very honorable mention goes to vii, who ended up at rank #1019 with correct solutions for all problems and input sets, but with a few penalties landing him or her 2'33" above the qualifying threshold.  Good job & better luck next time! I'm sure he or she has what it takes.

Will I continue to use Lisp (and SBCL)?

While coding the last problem (BFFs), I sometimes thought that the program would have been easier to write in another language, even in C.  I couldn't find an elegant recursive approach, and I have always found writing iterative code in Lisp a bit awkward; I never really mastered loop or any of the other "modern" iteration constructs (why learn them when do will do?).  But the primary reason why I would have preferred another language was data structures.  I built my graph from "node" objects (expressed as defstructs) with pointers, which was natural enough and worked well.  But in another language I would have used a convenient vector representation... in Lisp the syntax for dereferencing vectors is so verbose and awkward that I was worried about getting things wrong.  And I felt that in the short time available, I couldn't write a nice data type abstraction to hide the intricacies of such a vector representation.
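
To illustrate what I mean, a toy comparison (not code from my solution): the node-based representation I used, versus the index/vector style I shied away from.

    ;; What I used: nodes that point at each other directly.
    (defstruct kid
      bff        ; reference to another KID structure
      visited)

    ;; What I avoided: parallel vectors indexed by kid number, where every
    ;; access needs an explicit AREF and an index to keep track of.
    (let ((bff (vector 1 0 0 2)))
      (aref bff (aref bff 3)))   ; the BFF of kid 3's BFF => 0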

What about SBCL? As mentioned before, I normally use Franz Inc's excellent Allegro Common Lisp, but I wanted to use a "real" open source system for the contest.  I'm slowly getting used to SLIME after more than 20 years of Franz's Emacs-Lisp interface.  But when I had to debug my final BFFs solution, I felt almost blind, because I wasn't able to look at local variables from within a backtrace, something that works quite nicely with Allegro.  So I had to resort to the equivalent of printf()s (warn), which made me feel a bit dirty.  I should probably learn how others debug under SBCL; I'm sure there's a better way! [Edited to add: (declaim (optimize (debug 3))) seems to go a long way!]

In the weeks before the contest, I had toyed with the idea of using Go this year, but I didn't have the time to learn the language well enough to make this viable.  And overall, Lisp has treated me quite well this year.  Also, I've been using it for about 30 years now.  So I don't feel like quitting mid-contest.

There are still two opportunities for me to try to advance to round 2.  Timezone-wise they are much more convenient for me.  On the other hand, I need some luck to find a good solution for the medium-tough problem as quickly as in 1A, and a bit more to find a good solution for the tough problem at all.  But I'm an eternal optimist...

Sunday, April 10, 2016

Google Code Jam 2016 Qualification Round in Lisp

This year I registered for Google Code Jam again. The qualification round was this weekend, and I submitted solutions in Lisp.  This time I used SBCL with GNU Emacs and SLIME as the IDE, rather than Allegro Common Lisp with GNU Emacs and Franz Inc's emacs-lisp interface.
The puzzles were lots of fun, as usual.  A (Counting Sheep) and B (Revenge of the Pancakes) were easy.  When I looked at C (Coin Jam), I found it tricky, so I put it aside and did D (Fractiles) next.  Explaining the problem to my 19-year-old son helped me find a good approach.  After implementing it and submitting the solutions, I turned back to puzzle C.  My solution looked complex, and I suspected that there would be a more elegant way.  Still, I was happy with the solution, as it produced a result for the large set very quickly, so I submitted the results and was done.  My solutions can be found here.

So how did I fare?

The round went on for 27 hours, and I started somewhere in the middle due to timezone.  So after I was done with my solutions, I had to wait until the next day to see the final score.  The "small" problem datasets are checked immediately after submission, so I already knew I had all of those right before I went to bed.  Since they already accounted for 37 points, and I needed 40 to advance to the next round, I could be pretty optimistic that I had passed—having any large dataset correct would get me more than the three missing points.
The intermediate scoreboard put me on rank #1189; this is assuming (for everybody) that all the "large" solutions that were submitted would be correct.  I was pretty sure that mine were all correct, so I had hopes that my final rank would be even higher.
But my final rank was #2748, because it turned out that my "large" solution for D (Fractiles) was incorrect.  This was a big disappointment, because I had been very confident in my solution approach.  And it turned out that my error was a stupid one in one of the "easy" parts of the implementation: the decision whether the number of grad students (S) is sufficient to clean enough tiles in the base-length-K, complexity-C artwork to detect whether there's any gold tile.  The correct test would be
  (if (< (* s c) k)
...but I used:
  (let ((min-s (truncate k c)))
    (if (< s min-s)
My test was too lax, and I sometimes output a "solution" when I should have printed IMPOSSIBLE.  This stupid mistake cost me >2000 ranks.  It would be nice if I managed to avoid this kind of error in the next round.  But realistically I'll make even more such mistakes, because the problems will be harder and the time pressure much higher.
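
To see the gap between the two tests, take K=9, C=2 and S=4 (example values of my own choosing, not from the contest data): four students checking C=2 base tiles each can cover at most 8 of the 9 original tiles, so the case is impossible, yet my lax test lets it through:

  (let ((k 9) (c 2) (s 4))
    (list (< (* s c) k)             ; correct test => T   (impossible)
          (< s (truncate k c))))    ; my lax test  => NIL (wrongly "possible")
  ;; => (T NIL)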

How did the other Lispers do?

There are statistics about GCJ participants and submissions, and it's easy to find all submissions for this round that used Lisp.  We Lispers were quite successful this weekend, with 26 out of 27 proceeding to the next round, a success rate of 96% compared to 81.5% for the entire population.
The highest-ranking Lisper was Ipetru at #594, with all solutions in Lisp (and of course all correct).  I looked at his solutions for C and D, and they are so incredibly compact that I couldn't believe my eyes.  D used the same approach as I had, just very elegantly written: the code proper is about two lines; much harder to hide stupid mistakes in there! C used a completely different approach, deterministically generating Jamcoins rather than checking candidates as I did.
The second-ranking Lisper was DarkKnight. at #720.  He wrote only one solution in Lisp.  In fact he used a different language for every solution, and I mean all 8 solutions, not all 4 puzzles: bc, OCaml, Lisp, Racket, Lua, Perl, Octave, R.  Impressive! :-)

Friday, February 13, 2015


Panopticlick illustrates that even without cookies etc., modern browsers usually send enough specific-enough information to Web servers so that you can be tracked. The various useful standard HTTP headers effectively work like a fingerprint of your browser/OS/hardware combination.

When I look at the Panopticlick results of my favorite browser (I get 22.26 bits of "identifying information", while more senior users have reported scores of up to the full 24 :-), one thing that stands out is a long list of "System Fonts". Arguably it is useful for me when Web sites know what fonts I have installed on my system, so that they can present Web pages with fonts that I actually have, rather than having to send me the fonts as well. So the intention is good, but the implementation discloses too much of my typographic tastes. What can we do to fix this?

Well, that should be quite obvious: Instead of my browser sending the full list of fonts, it could send a Bloom filter that matches the fonts that I have. When a Web server wants to render a document for me, it can check for some candidate fonts whether I have them. Bloom filters are approximate and will sometimes generate false positives, but one Comic Sans Web page in 1'000 or so should be a small price to pay to get my virginity, er, privacy back.

You may respond that a priori the Bloom filter discloses as much of my privacy as the full list of fonts. But! I can simply send a new Bloom filter ("salted" differently) to each site I visit. Voilà how I'll defeat all traceability of Web visits, undermine the business model of the Internet economy, and destroy the entire Western civilization. Muahaha!
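
For the unconvinced, here's a toy sketch of what the browser could compute (sizes and hashing are made up for illustration; sxhash is certainly not the hash function you'd want in production):

    ;; A per-site "salted" Bloom filter over the installed font names.
    (defun make-font-filter (fonts salt &key (bits 1024) (hashes 4))
      (let ((filter (make-array bits :element-type 'bit :initial-element 0)))
        (dolist (font fonts filter)
          (dotimes (i hashes)
            (setf (bit filter (mod (sxhash (format nil "~a#~a#~d" salt font i))
                                   bits))
                  1)))))

    ;; What the server can do: test individual candidate fonts.
    (defun font-probably-installed-p (filter font salt &key (hashes 4))
      (let ((bits (length filter)))
        (loop for i below hashes
              always (plusp (bit filter
                                 (mod (sxhash (format nil "~a#~a#~d" salt font i))
                                      bits))))))

The server receives the salt and the bit vector and can test any candidate font it cares about, but it cannot correlate this filter with the one I sent to another site, because that one was built with a different salt (and the false positives add a bit of plausible deniability on top).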

-- BloFELD
(Bloom Filter-Enabled Limited Disclosure)

(Apologies to smb and Bill Cheswick, who seem to have fully baked and published a better version of this idea in 2007, and to the probably numerous others)

Friday, February 06, 2015

Ceph Deep-Scrubbing Impact Study


I help operate two geographically separate OpenStack/Ceph clusters consisting of 32 servers each, of which 16 (per cluster) are dedicated OSD servers. Each OSD server currently has six OSDs. Each OSD runs on a dedicated 4TB SAS disk. Each server also has SSDs, which are mostly used for OSD write journals.

We monitor our systems using the venerable Nagios. My colleague Alessandro has written many specific checks for infrastructure services such as Ceph. Some of them periodically check the logs for possibly non-harmless messages. It can be interesting to try to understand these messages and get down to their root cause. Here's one from early this morning (edited for readability):

monitor writes:
> Service: L_check_logfiles
> State: WARNING
> Output: WARN - WARNING - (30 warnings in check_logfiles.protocol-2015-02-06-03-35-35) - File=/var/log/ceph/ceph.log Message=2015-02-06 03:35:26.877633 osd.1 [2001:db8:625:ca1e:100::1021]:6800/219476 2257 : [WRN] slow request 30.185039 seconds old, received at 2015-02-06 03:34:56.692108: osd_op(client.1932427.0:27645811 rbd_data.1fd765491f48ea.00000000000000a9 [stat,set-alloc-hint object_size 8388608 write_size 8388608,write 52720648192] 5.4220493a ack+ondisk+write e9526) v4 currently no flag points reached ...

The "Output" line tells us that a write operation to osd.1 was stuck in a queue for 30+ seconds around 03:35. Why did that happen in the middle of the night, when utilization is low?

Graphite to the Rescue

Lately we have set up a CollectD/Graphite monitoring infrastructure for each site. It collects data in ten-second intervals. The ten-second samples are only retained for three hours, then aggregated to one-minute samples that are retained for 24 hours, and so on. Because this event happened at night, I missed the fine-grained samples, so all the graphs shown here have one-minute temporal resolution.

The event is visible from the "#Ceph" dashboard on our Graphite installation. I extracted a few graphs from it and focused them on the event in question.

Here is a graph that shows block I/O (actually just "I") operations per second of all OSD processes summed up for each OSD server:

These patterns (a few OSD servers reading heavily) indicate "scrubbing" activity (scrubbing checks existing data for correctness).

We can look at the block I/O read rates on the individual OSD disks on ceph21:

and see that /dev/sdc, /dev/sdd and /dev/sde were quite busy, while the other three OSD disks were mostly idle.

The busy devices correspond to osd.0, osd.1 and osd.2:

    $ ssh ceph21 'mount | egrep "/dev/sd[cde]"'
    /dev/sde1 on /var/lib/ceph/osd/ceph-0 type xfs (rw,noatime)
    /dev/sdc1 on /var/lib/ceph/osd/ceph-2 type xfs (rw,noatime)
    /dev/sdd1 on /var/lib/ceph/osd/ceph-1 type xfs (rw,noatime)

So we have strong confirmation that osd.1 (which was slow) was being scrubbed (heavily read from). If we look for "scrub" messages in the OSD log around that time, we see that there were in fact three deep-scrubs finishing between 03:38:11 and 03:38:40:

    $ ssh ceph21 'fgrep scrub <(gzip -dc /var/log/ceph/ceph-osd.*.log.1.gz) | sort -n' | egrep ' 03:[345]'
    2015-02-06 03:38:11.058160 7f3376f7f700  0 log [INF] : 5.13a deep-scrub ok
    2015-02-06 03:38:36.608224 7ffe093c4700  0 log [INF] : 5.111 deep-scrub ok
    2015-02-06 03:38:40.711687 7f29ac56c700  0 log [INF] : 5.15a deep-scrub ok

The OSD logs unfortunately don't tell us when these scrubs were started, but from looking at the graphs, in particular the second one, we can guess with high confidence that they all started between 03:21 and 03:22.

Now we can check which OSDs these PGs map to, and we find that, indeed, they respectively include OSDs 0, 1, and 2:

    $ ceph pg dump all | egrep '\<(5\.1(3a|11|5a))\>' | awk '{ print $1, $14 }'
    dumped all in format plain
    5.15a [0,56,34]
    5.13a [1,64,70]
    5.111 [2,66,86]


So what can we learn from this?

  • Deep-scrubbing has an impact on Ceph performance
  • It can happen that, on a given server, multiple OSDs are busy performing deep scrubs at the same time
  • When three deep-scrubs happen in parallel on the same server, the impact can be very visible and lead to >30s queues. This also seems to affect write OPs, not just reads.

I am somewhat surprised by this, as I would have thought that the impact is mostly due to "per-spindle" limitations (random read op/s), but maybe there's another bottleneck.  One possibly interesting observation is that the three disks in question are connected to the same SAS host adapter, namely the one at PCI address 05:00.0 (there's a second host adapter at 83:00.0):

$ ssh ceph21 'cd /dev/disk/by-path && /bin/ls -l *-lun-0 | cut -c 38-'
 pci-0000:05:00.0-sas-0x4433221100000000-lun-0 -> ../../sdc
 pci-0000:05:00.0-sas-0x4433221101000000-lun-0 -> ../../sde
 pci-0000:05:00.0-sas-0x4433221102000000-lun-0 -> ../../sdd
 pci-0000:05:00.0-sas-0x4433221103000000-lun-0 -> ../../sdf
 pci-0000:83:00.0-sas-0x4433221100000000-lun-0 -> ../../sdg
 pci-0000:83:00.0-sas-0x4433221101000000-lun-0 -> ../../sdh
 pci-0000:83:00.0-sas-0x4433221102000000-lun-0 -> ../../sdi
 pci-0000:83:00.0-sas-0x4433221103000000-lun-0 -> ../../sdj
 pci-0000:83:00.0-sas-0x4433221104000000-lun-0 -> ../../sdk
 pci-0000:83:00.0-sas-0x4433221105000000-lun-0 -> ../../sdl
 pci-0000:83:00.0-sas-0x4433221106000000-lun-0 -> ../../sdm
 pci-0000:83:00.0-sas-0x4433221107000000-lun-0 -> ../../sdn

Maybe that SAS adapter was the bottleneck in this case.

Possible avenues for improvement

Better foreground/background I/O isolation

Ceph could do a (much) better job isolating actual user I/O from "background" I/O caused by tasks such as scrubbing or rebalancing. See Loïc Dachary's post on Lowering Ceph scrub I/O priority for something that can be configured on recent-enough versions of Ceph. (Thanks for the pointer, Harry!)
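
If I read that post correctly, on a recent-enough Ceph this boils down to something like the following (it only takes effect when the OSD disks use the CFQ I/O scheduler, and you should double-check the exact option names against your Ceph version):

    $ ceph tell osd.* injectargs '--osd_disk_thread_ioprio_class idle'
    $ ceph tell osd.* injectargs '--osd_disk_thread_ioprio_priority 7'

(Plus the corresponding entries in ceph.conf to make the change persistent across OSD restarts.)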

Better scrub scheduling

Ceph could do a (much) better job spreading out deep-scrubs over time.  The effect described here is not an isolated occurrence; earlier I had observed periods of massive deep-scrubbing, separated by multi-day periods with no deep-scrubbing at all.  For example, this is the block read-rate graph across our other Ceph cluster over the past 60 hours:

You can see that all the deep-scrubbing is done within one 20-hour period.  Crazy!  This should be spread out evenly over a week (the global deep-scrub interval).

OSD/controller mapping on our hosts

We could do a better job distributing (consecutive) OSDs across controllers. While we're at it, we should also make sure that journals are distributed nicely across all SSDs, and that we never get confused by changing kernel device names for our SAS disks. And I want a pony.

Any other ideas?