When Google Reader met its untimely end, I looked around at the alternative RSS readers and settled on Feedly for its features and ease of use. However, I have some strong disagreements with the default Feedly UI, which I think makes for a terrible reading experience, so I set up the Chrome extension Stylish and got to skinning.
Over time, the Feedly team has updated their design and I’ve had to modify and add to my styles; I expect that trend to continue, so think of this as a living document rather than a one-time fix. I’ll leave version information at the bottom if you’re curious.
Recently at SourceAudio we decided to make supporting foreign languages a priority. We’d always supported HTML-encoded foreign-language characters, but clients found that extremely clumsy and had no desire to learn that arcane syntax, for which I couldn’t blame them. The solution was to start supporting those characters properly, which meant switching character encodings across all layers of the site. After some deliberation, we decided to go with UTF-8, since that would get us all the characters we needed and seemed to have the widest support.
If you’re not familiar with character encoding, Joel Spolsky gives a good overview here. Basically, we needed to support characters like õôóõêç and 测试 in addition to the traditional English characters.
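By way of illustration (a quick Python sketch, not anything from the site’s codebase), UTF-8 covers all of these characters, at a varying number of bytes per character, while a single-byte encoding can’t:

```python
# Character counts vs. UTF-8 byte counts for the examples above.
accented = "õôóõêç"  # Latin letters with diacritics: 2 bytes each in UTF-8
cjk = "测试"          # CJK characters: 3 bytes each in UTF-8

print(len(accented), len(accented.encode("utf-8")))  # 6 characters, 12 bytes
print(len(cjk), len(cjk.encode("utf-8")))            # 2 characters, 6 bytes

# A single-byte encoding like Latin-1 covers the accented letters
# but has no way to represent the CJK characters at all:
accented.encode("latin-1")  # works
try:
    cjk.encode("latin-1")
except UnicodeEncodeError as e:
    print("latin-1 can't encode:", e.object)
```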
With that decided, it was time to start working on the layers. First up, we needed our backend data to be stored in UTF-8 and that meant updating MySQL.
I’ve had several topics come up while working at SourceAudio that would be perfect for posting according to my goal of writing about topics that are tricky to Google. However, it’s hard to decide sometimes whether it’s better to share or if there’s a competitive advantage to keeping certain things hard to discover.
But what about more complex topics, like how to generate iTunes-compatible metadata in AIFF files or how to properly estimate the size of zip files when creating them on the fly? I loved figuring that stuff out and it’d be fun to write about, but those topics could be of interest to SourceAudio competitors, so do I have a responsibility to keep them secret? Not that those things are impossible to discover if someone were interested, but the time involved in doing the research, in poking around at files, in poring over documentation, in running test after test until I got it just right – all that really adds up. There’s value in that knowledge.
As someone who’s benefited heavily from others being willing to share their valuable knowledge, do I have an obligation to share when I figure something out? Or is the greater obligation to the company?
It’s the classic “Information wants to be free” problem and I don’t have the answer. My heart wants to set it loose but my head is a bit more cautious. I’m usually a head guy, which I guess explains why I haven’t posted in a couple years, but I don’t want to be Smaug, sleeping forever on my piles of information and contributing nothing. How do you know when it’s time to open up the Lonely Mountain?
While trying to figure out why file uploads weren’t working in IE9 on SourceAudio, I discovered an interesting quirk: IE9’s user agent as reported by navigator.userAgent isn’t necessarily the same as the user agent it sends in HTTP requests.
Apparently this is intended and understood behavior but it was the first I’d heard of it.
To summarize, Microsoft found that as programs and add-ons added “feature tokens” to your user agent string, the string could become so long that some servers would throw a fit. To prevent the issue, IE9 stopped including these feature tokens when sending the user agent to the server, so instead of sending
Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; WOW64; Trident/5.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; .NET CLR 1.1.4322; .NET4.0C; .NET4.0E)
it now just sends
Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; WOW64; Trident/5.0)
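One practical upshot is that server-side sniffing that keys off the MSIE token works the same on both forms. A quick Python sketch of that idea (not how SourceAudio does it, just an illustration):

```python
import re

# The long string IE9 reports via navigator.userAgent...
full = ("Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; WOW64; Trident/5.0; "
        "SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; "
        "Media Center PC 6.0; .NET CLR 1.1.4322; .NET4.0C; .NET4.0E)")
# ...and the trimmed one it actually sends in HTTP requests.
trimmed = "Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; WOW64; Trident/5.0)"

def msie_version(ua):
    """Pull the IE version out of a user agent string, or None if absent."""
    m = re.search(r"MSIE (\d+\.\d+)", ua)
    return m.group(1) if m else None

print(msie_version(full), msie_version(trimmed))  # 9.0 9.0
```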
You should really watch Richard Dawkins: Faith School Menace:
I found the part with the teacher and students in the Muslim faith school particularly frustrating. How can you reasonably expect pupils to make an educated decision about which viewpoint is correct when their instructor clearly has her own anti-science beliefs and can’t even answer common questions about evolutionary theory? All the while, you’re teaching them in another class that all the information in the Koran is absolutely correct. When conflicts arise, are they going to follow the complex theory that was taught half-assed by someone who doesn’t believe a word of it, or are they going to go with the view that was taught with conviction and fervor in a different class (and which, by the way, you’ll go to hell for not choosing)?
Is that really an atmosphere of ideological equality? Never mind, as he discusses later, that kids (and even adults) are more inclined to believe purpose-driven explanations anyway, because that’s the way we’re wired. If you want to teach evolution and you want children to really get it, you need someone teaching it who truly understands it and wants to convince their students. An Islamic shill is not going to do that. They can claim they’re giving children a choice, but without proper education on both viewpoints and without proper tools with which to make choices (an education founded in critical thinking instead of indoctrination), it’s not really a choice at all.
And then the guy invokes “just a theory” and you know that term was dropped in science class without any explanation of how its scientific and common usages differ.
PS – Thanks for not letting US viewers watch it on your website, Channel 4. I was perfectly happy to support you and contribute to your advertising revenue by watching your version but I guess that’s not happening.
Google today announced Google TV, which, among a number of other interesting things, lets you easily run Android apps on your TV. There are plenty of games in the Android Market. With a good controller, I’m not sure how that experience would be very different from playing something on a traditional gaming console.
Sure, there are other systems you’d probably want in place – friends, achievements, etc. – but you already have your contacts built into the phone and there are third party achievement systems even now (though something a little more ubiquitous would be nice). Graphics are going to take a hit but Google could easily remedy that by throwing in some specialized hardware and you could certainly play less graphically intensive games in the meantime.
There are obviously a number of hurdles still, but they’re talking about getting a device with a content delivery network and a pre-built library of games and attaching it to your TV. That goes a long way. And if you build a game for that, it’s also going to show up on one of the biggest smartphone install bases – that’s not a bad deal.
SourceAudio, like a lot of ajax heavy applications, uses the hash to store state information. For example, when you search, you might end up on a url like http://www.sourceaudio.com/#explorer?s=search+terms&pg=1
The “page” is “explorer” and the parameters come after the question mark. There are a number of ways you can format your hash, but using the standard URL format has been a pretty good solution for us. At least until today.
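Because the fragment reuses the standard query-string format, standard URL tooling can pick it apart. A Python sketch of the idea (our actual parsing happens client-side in JavaScript):

```python
from urllib.parse import urlparse, parse_qs

url = "http://www.sourceaudio.com/#explorer?s=search+terms&pg=1"

# Everything after "#" comes back as the fragment...
fragment = urlparse(url).fragment          # "explorer?s=search+terms&pg=1"
# ...which splits into a "page" and an ordinary query string.
page, _, query = fragment.partition("?")
params = parse_qs(query)                   # "+" decodes to a space

print(page)    # explorer
print(params)  # {'s': ['search terms'], 'pg': ['1']}
```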
I wrote a little program to calculate the “winning” boards of the Knight’s Tour problem after my grandfather brought it up and noticed some C# array performance stats while I was at it. I ended up using lots of .Clone() operations on the arrays and just some basic index accessing.
On the first try, I wrote it with a multidimensional int[,] array and it took 48.5s to run 10M iterations. After reading that you should really flatten multidimensional arrays and changing the code to use a flattened int[] instead, otherwise identical code took only 10.2s to run. That’s pretty huge.
Out of curiosity, I switched it again to use ArrayList objects and knocked it even further down to 6.4s.
So, multidimensional arrays, slow! ArrayLists, fast! Is anything else better?
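For reference, the flattening trick itself is just index arithmetic. A small sketch (in Python rather than C#, purely to show the indexing):

```python
# An R x C board stored two ways: nested, and as one flat list where
# the 2-D index (r, c) is computed by hand as r * C + c.
R, C = 8, 8
nested = [[r * C + c for c in range(C)] for r in range(R)]
flat = [0] * (R * C)

for r in range(R):
    for c in range(C):
        flat[r * C + c] = nested[r][c]

# Same element, two addressing schemes:
assert flat[3 * C + 5] == nested[3][5]
```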
I have a site with a cache of a bunch of tracks (it’s a music site), and there are instances when a user does something and I need to rebuild that cache. The rebuild takes a good five or ten seconds and will only get slower as we add more tracks, so having the user wait while I do it is no longer feasible. I have a script that does a periodic rebuild (every fifteen minutes), but I’ve found that users don’t understand that their changes were accepted when they don’t show up for several minutes. It has to rebuild on demand but asynchronously – that way their changes are applied quickly, but they don’t have to wait on it.
I tried googling for asynchronous php solutions but couldn’t get anything to actually work. Brent accepted this answer but I had no luck with that. It would make the call but it wouldn’t do it asynchronously. I always had to wait for the thing to finish.
Some people suggested using the command line to run PHP and make the call that way, but I couldn’t do that because of some architectural crap. I came up with this beauty though:

exec('wget -O /dev/null -o /dev/null -qb --no-cache --post-data=foo=bar http://theurl.com/whatever.php');
and it works perfectly. You can call a script on any server (not just your own like with php) and it runs in the background and discards the output quietly. Just one line and forget it.
Naturally, this only works on Linux servers with wget installed. Sorry, Windows folks.
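The same fire-and-forget idea, sketched in Python for comparison (here "sleep 2" stands in for the slow rebuild; any command works, including the wget call above):

```python
import subprocess
import time

def fire_and_forget(cmd):
    """Start cmd in the background, discard its output, and return immediately."""
    return subprocess.Popen(cmd, stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)

start = time.time()
proc = fire_and_forget(["sleep", "2"])  # stand-in for the slow cache rebuild
elapsed = time.time() - start
print(f"returned after {elapsed:.3f}s while the child keeps running")
```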