The Roberto Selbach Chronicles

Category: Tech

Caddy and the Importance of Licenses

I haven’t commented on the recent brouhaha caused by Caddy’s decision to offer commercial licenses, so I’ll do it briefly here before moving on to the important part.

I am fine with it. I don’t love it, but it’s fine. The Caddy authors have every right to try to profit from their work. Best of luck to them, they deserve it. Do I think they mangled the announcement? Yes. Do I think the amount of vitriol out there was justified? No. But again, it’s fine.

But I want to talk about something else and I’ll use this episode to illustrate it. Matt Holt published his thoughts on the experience in The Realities of Being a FOSS Maintainer. It’s a nice read, but there is something there that I think we should not overlook.

Midway through Matt’s post, he clarifies the situation with their build server, which they removed (most likely made private) from Github.

To clarify, the Caddy build server was once open source, but we closed it up in the interest of focusing the technical attention of our community and our limited development resources (mostly time) on Caddy itself. The build server is not generalizable, and only exists to serve the Caddy project. As such, we’re taking it under our wings to develop and maintain it as needed. If you find some old source code still online, be aware that no license file was added to the code, and we have not granted others any license to use it.

This highlights the importance of checking the license of “FOSS” software. Being open source means something. It doesn’t just mean “hosted on Github.” Just because you find a piece of code on Github, it doesn’t mean you can freely use it. It sucks, but as the above paragraph shows, it matters.

What Matt is saying here is that although his build server was open source, it no longer is and if you have the code, you were never granted any license to use it. This cannot be, of course. Either it was never open source to begin with, or you were granted a license to use it. Which one is it?

Since Matt makes it clear that “no license file was added to the code,” that means it’s the former: it was never open source, no matter what he says now. Whether intentionally or not, people were misled into thinking it was.

People would find the code on Github and assume it was open source. That’s why checking the license is important. A project without a license is not open source and you are at risk.

I’ve seen small projects on Github before with no license information at all. It always made me uncomfortable. Now I see I was right.

I want to make clear this is not about Caddy or Matt. Again, I’m fine with their decision. My points are general:

  • Properly license your open source software.
  • Check the license of software you use.

If you don’t, this will come back to bite you.

Cedilla on Fedora 25

Anyone who uses a US International keyboard layout on Linux has probably run into the fact that, in most distributions, the combination ‘+c produces a “ć” instead of a “ç”. Fixing this on Fedora 25 is easy, but not obvious.

tl;dr – I created this script, which performs all of the steps below automatically. Just run this:

curl https://raw.githubusercontent.com/robteix/c-cedilla-fedora/master/c-cedilla-fedora | bash

If you’d rather not run the script, read on.

First, let’s create a new keyboard map for your user. Run the command below:

sed -e 's,\xc4\x86,\xc3\x87,g' \
    -e 's,\xc4\x87,\xc3\xa7,g' \
    < /usr/share/X11/locale/en_US.UTF-8/Compose > ~/.XCompose

This copies Fedora’s key-mapping file into the user’s $HOME directory, replacing the “Ć” with a “Ç” (and, likewise, “ć” with “ç”).

Now let’s configure GNOME so that it stops managing the keyboard configuration, allowing us to use our own:

gsettings set org.gnome.settings-daemon.plugins.keyboard active false

To select the appropriate input method, Fedora provides a small program called im-chooser, which is not installed by default. To install it:

sudo dnf install im-chooser

Finally, run im-chooser and choose “Use X Compose table”.

Click “Log out” to apply the changes, and from then on it should be possible to produce the c-cedilla with the ‘+c combination.

Why do Salespeople Believe in Magic?

File this one under techies complaining about non-techies. Over the years, I have noticed a pattern with salespeople: they have a firm belief in wishful thinking. They honestly believe that wishing something to be true will magically make it so.

The specific pattern I noticed many a time goes something like this.

The customer wants something done and goes through their account manager to request that. The account manager — a fancy name for salesperson — commits to a date without talking to the developer first. They then go to the developer and tell her something to the effect of “yeah, I’m going to need that by Friday morning.” The exasperated developer explains that this is not feasible and the account manager responds by simply repeating that they will need it by Friday. They usually leave at this point satisfied that everything is fine.

Come Friday, the account manager is then horrified to discover that the feature is not ready. “But we made a commitment with the customer,” they’ll say, emphasizing the “we” that never was.

This has happened so many times in my career that I should no longer be surprised, and yet I am. Every time.

Of course, what they are really doing is trying to put pressure on the developers so that they will hurry up and deliver within the desired timeframe. And to be fair, it can sometimes work; but if the developer tells them in no uncertain terms that the deadline is not feasible, then the salesperson is taking the risk alone.

By committing to a date with the customer before talking to the developer, the salesperson has already taken a big risk. They will then try to spread that risk by sharing it with the developer. Since the developer never had a chance to agree to it in the first place, it is only fair that she should be able to refuse the risk if she doesn’t believe it is worth taking. Why should she?

And yet, we developers often take on the risk simply by being passive. This problem can be exacerbated by managers who are also passive. Many years ago I worked at a company where the engineering powers that be were submissive to the sales department, which led to some of the worst experiences of my life as a developer, including the Project From Hell.

At the time, I led the engineering services group, so whenever a new project came about that required software development, it had to go through me for analysis. Another group leader analysed the infrastructure projects. Then one day this large project showed up on my desk. I looked over it along with the infrastructure guy and we agreed that it was a monster of a project, including developing a huge distributed nationwide (Brazil) infrastructure, a huge system developed from scratch, and a lot of technology transfer and end-user training.

The project had some timeframes attached to it, and although they were tight, they were not what immediately caught our eyes. We saw the pricing being offered to the customer, and it was clear to us that it was too low. We raised the issue but were told not to worry about it. There are valid business reasons to sometimes do projects at a loss, so that was okay, except the exact wording to us was, “please limit yourselves to your little expertise sphere.” Ok then.

We talked to our teams and we committed to the dates defined in the project documentation. Again, it was a little tight but feasible: we would have many months to get things ready for initial deployment.

About a week after we approved the statement of work, I received a call from the customer. At this point, I had not yet engaged the customer at all, so they had gotten my phone number from their sales rep. The customer was possessed. This person I had never talked to before was shouting at me over the phone that we were late. It took me a while to calm him down and understand what the issue was.

As it turns out, the sales team, in an effort to get the customer’s signatures before the end of the quarter, had changed the statement of work after we had gone through it, moving the dates up to that very moment. We had not even started working on the project yet and the customer was expecting it to be ready right then. We were late before we even started.

Many stressful meetings later, we managed to agree on some new dates, but they were much tighter than the ones originally in the statement of work and required us to outsource parts of the project to a contractor to help speed things up. Incidentally, what we paid the contractor for only a part of the project was more than what our company made on the project. All because the sales team wanted to make sure they got their commission in that quarter.

The people responsible would eventually be let go by the company, in great part due to this, but that did not prevent the company from losing at least an order of magnitude more than what it made from that project.

And still, I continue to see salespeople ignoring the developers and then trying to share the fallout. Developers need to stand firmly by their professional evaluations of deadlines and technical feasibilities.

Of course, if you turn out to be wrong, all of this is moot.

Things I wish I knew one year ago

About a year ago, I started a new project at a stealth startup. Aside from the fact that it would provide currency that could then be exchanged for goods and services, including food for me and my family, one of the biggest reasons I took the job was that I was going to get to learn a lot of new things.

Now, over a year later, I can say that learn I did. And I caught myself thinking of what I wish I had known back in the beginning. The following list is probably not exhaustive, but it’s a good sample of what I’ve learned. These are mostly things I’ve dealt with through rewrites and redesigns, or things I still struggle with.

JavaScript is magical and that’s not good

I already had a general idea of the issues with JavaScript, but I really failed to grasp how magical JavaScript is. And when I say magical, I don’t mean the good kind that conjures fluffy rabbits out of hats. I mean evil dark magic that would make Voldemort cry. I mean magic in the sense of a language that never seems to behave just the way you think it should.

I wish I had known back then how much work it would take to do what at first seemed like basic, simple stuff. I often caught myself fighting JavaScript instead of working with it.
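
A few concrete examples of the kind of behavior I mean; these are real JavaScript semantics you can verify in any browser console:

[] + []            // "" (an empty string)
[] + {}            // "[object Object]"
'5' + 1            // "51" (string concatenation)
'5' - 1            // 4 (numeric subtraction)
typeof NaN         // "number"
0.1 + 0.2 === 0.3  // false

None of these are bugs; they all follow from the coercion rules. But a language where + and - live in different universes will always feel like dark magic.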

AngularJS is awesome but the web works on jQuery

I think AngularJS is fantastic. Really I do. I am not at all sorry for having picked it for the project. I think it’s well-designed and logical and yes, it is magical sometimes, but often in a good way.

But…

I wish I knew back then how much work I would have to do making AngularJS and jQuery work together, and you have to make them work together because honestly, unless you want to reinvent the wheel for every little thing, you will end up using jQuery components. We will revisit that later.

So yeah, AngularJS is great but for it to work properly, you have to do things the Angular way and that means turning a lot of jQuery components into AngularJS directives. Once you get the hang of it, it gets much easier and faster but there’s a bit of a learning curve there.
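
To give an idea of what that wrapping looks like, here is a minimal sketch of a directive around a hypothetical jQuery plugin; the plugin name (fancyPicker), its onSelect option, and the module name (app) are all made up for illustration:

// A minimal AngularJS 1.x directive wrapping a hypothetical jQuery plugin.
// Assumes jQuery is loaded before Angular, so `element` is a full jQuery
// object with the plugin available on it.
angular.module('app').directive('fancyPicker', function () {
  return {
    restrict: 'A',
    require: 'ngModel',
    link: function (scope, element, attrs, ngModel) {
      element.fancyPicker({
        onSelect: function (value) {
          // The plugin fires outside Angular's digest cycle, so wrap
          // the model update in $apply to keep the bindings in sync.
          scope.$apply(function () {
            ngModel.$setViewValue(value);
          });
        }
      });
    }
  };
});

The $apply call is the part that trips most people up at first: without it, the model updates but the rest of the page never notices.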

asynchronously have You think undefined

You have to think asynchronously to work with JavaScript. For a while, it was painful and it surely added to my perception of JavaScript being magical. JavaScript is entirely asynchronous and works like a turn-based game. This can be annoying at first and I caught myself working around this in all kinds of wrong ways. I wish I knew back then what I know now so I’d embrace the async’ness from the get-go. Things would be so much easier.
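
The classic trap looks something like this (using jQuery’s $.getJSON; the /api/items URL and the render function are just placeholders):

// Wrong: treating an asynchronous call as if it returned immediately.
var items;
$.getJSON('/api/items', function (result) { items = result; });
console.log(items); // undefined -- the callback hasn't run yet

// Right: do the work inside the callback, embracing the async'ness.
$.getJSON('/api/items', function (items) {
  render(items); // placeholder for whatever consumes the data
});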

Large single-page web applications rock but…

Picking AngularJS as my framework of choice, I designed the web client for the project to be almost purely implemented inside the browser in JavaScript as a single-page app. This has some very good consequences, chief among them responsiveness. The application feels fast. Sure, it has to download a lot of JavaScript up front, but (1) this is not nearly as bad as I expected it to be and (2) with caching this is done once and that’s it. Switching from one “page” to another is essentially instantaneous, since the parts of the page are pre-loaded and fetched via AJAX. The server actually has to do very little.
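
For reference, the “pages” in a setup like this are just client-side routes. Here is a minimal sketch using AngularJS’s ngRoute module; the route paths, templates, and controllers are illustrative, not our actual ones:

// Each "page" is a template plus a controller, swapped in client-side;
// the server only serves static assets and the JSON API.
angular.module('app', ['ngRoute']).config(function ($routeProvider) {
  $routeProvider
    .when('/orders', { templateUrl: 'partials/orders.html', controller: 'OrdersCtrl' })
    .when('/settings', { templateUrl: 'partials/settings.html', controller: 'SettingsCtrl' })
    .otherwise({ redirectTo: '/orders' });
});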

But now, if I could start all over again, I would not do it as one big single-page app. It’s just too much JavaScript code, and I’ve already established how I feel about that. I never have the feeling that things are stable, even though there are integration tests passing all the time. I can’t explain it, but it all feels like a big house of cards, like it’s all going to come tumbling down all of a sudden. I simply don’t trust the code. I would much rather use JavaScript sparingly, to do some things but surely not all. I have been fixing this slowly by isolating some parts of the application into separate pieces, but to do it for everything would require a rewrite that I don’t have time for right now.

Trust the Go standard lib

As an alternative title, “don’t go too fancy with Go.” It’s not that I didn’t trust it, but I sure did try things that honestly didn’t make sense. I am looking right at you, Martini; I had to do a lot of work to get rid of you.

As it turns out, the standard library is really good. And for the things that are not in the standard lib, there are packages that are really well done and work with the standard lib, not against it. I love you, Gorilla Toolkit.

Never assume jQuery components will do what you need

The current state of JavaScript is such that there is literally—in the figurative sense—an infinite number of components out there ready for you to use. As far as I can tell, anything I could ever possibly conceive of has already been done and hosted on Github. There are time pickers, calendars, and even a business-hours display widget ready for you.

And yet.

I have yet to find one single component that is flexible enough to work for my needs without extensive modifications. And it’s not like I have especially unique needs; I am talking about things that are, to the best of my knowledge, pretty basic. But no, they never do that one tiny thing you need them to, and I end up spending a lot of time implementing the features I need.

Bootstrap responsiveness

For a long time I misunderstood how Bootstrap deals with responsive design. Trusting Bootstrap’s system is a must. It works very well. I wish I understood it better back then. It would have saved me a lot of time. And related:

Don’t use an existing Bootstrap theme

It’s just not worth it. Back then, we didn’t have a designer and we wanted to get our minimum viable product out as quickly as possible, so we went for something ready-made. We chose a premium theme that is very popular. It was good looking and seemed really nice. Boy, do I wish I had a time machine now.

We are now stuck with it. Switching to something else is something we will have to do eventually and, let me tell you, it will be painful. These themes change everything, and not in a way that lets us simply swap themes painlessly. It permeates everything.

Think of touchscreens from the beginning

Oh boy. You do a lot of nice stuff, and then one day you try it on a tablet and nothing works, just as your co-founder tells you that your first customer will use the app exclusively on tablets. Then you start panicking and sobbing and crying and running for help. You then find out about the hack that is jQuery UI Touch Punch, and although it fixes some of the problems it creates new ones, and then there’s jQuery Mobile, which is OMG fantastic except, lo and behold, it doesn’t play well with AngularJS. And then you find some projects on Github trying to get jQuery Mobile to work alongside AngularJS, but they’re no longer maintained and won’t really work that well and…

Well, it would be best to think about touch support from the get go.

Stop worrying about the size of your JavaScript

Or actually, do worry, but for the right reasons. One of the things I learned is that the web is full of outdated and/or otherwise uninformed advice.

Stop laughing.

Yes, it’s obvious, but still. With little experience in web frontend programming, I used to worry about the size of my JavaScript code. And the web doesn’t really help: search a bit and you will find people telling you how horrendous it would be to download 100Kb of JavaScript, and how the only way to save your customers is to load files from popular CDNs.

It turns out that’s kind of like cargo-cult programming: a bunch of people repeating something they heard somewhere. Things change. We measured.

Concatenating and minifying our files and serving them gzip’d and with aggressive caching is actually faster. Even concatenation is about to become irrelevant now that SPDY is becoming more common (IE11 now supports SPDY3 and the upcoming versions of Safari for OSX and iOS will as well.)
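
The build step for that is small. Here is a rough sketch of the idea, assuming a gulp-based build with the gulp-concat and gulp-uglify plugins; the paths are illustrative, and gzip plus caching are handled by the web server, not the build:

// Concatenate and minify all scripts into one cacheable file.
var gulp = require('gulp');
var concat = require('gulp-concat');
var uglify = require('gulp-uglify');

gulp.task('scripts', function () {
  return gulp.src('src/js/**/*.js')  // illustrative source path
    .pipe(concat('app.min.js'))      // one file, one request
    .pipe(uglify())                  // minify
    .pipe(gulp.dest('dist/'));       // serve gzip'd with aggressive caching
});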

The first access to our app downloads almost 1MB of JavaScript code and CSS files. It sounds horrible, but it’s really fast, even on bad 3G, with defer and async. Speaking of which.

Most of the advice repeated as a mantra on the web is from before browser support for <script defer> and <script async> was widespread.


I think that’s about it. Now all I need is a time machine.

Join us on App.Net

I liked the idea behind App.Net (or ADN for the initiated) from the start; I happily signed up during the initial funding effort, before it even existed. It is quite like Twitter, although it has some pretty interesting API advantages that allow clients to do things that are not possible on Twitter, such as creating private chat rooms (with Patter). I found a text by Matt Gemmell, App.Net for conversations, that sums it up nicely:

The interesting part, though, is what you won’t be used to from Twitter. There are no ads, anywhere. Because it’s a paid service, there’s no spam at all; I’ve certainly never seen any. There’s an active and happy developer community, which ADN actually financially rewards. There’s a rich, modern, relentlessly improved API. And again because it’s a paid service, there’s a commensurately (and vanishingly) low number of Bieber fans, teenagers, illiterates, and sociopaths.

But the real difference I notice is in the conversations. On Twitter, the back-and-forth tends to be relatively brief, not only in terms of the 140-character limit, but also the number of replies. There’s a certain fire-and-forget sensibility to Twitter; it’s a noticeboard rather than a chatroom. Then there’s the keyword-spam (woe betide the person who mentions iPads, or MacBooks, or broadband, or just about anything). Oh, and let’s not forget the fact that any malcontent with internet access can create an account (or two, or ten) in seconds. Not a happy mixture.

I’d add that there seems to be less of a popular clique on ADN. Popular users seem to engage much more with “regular people” than on Twitter. And then there are the developers… although most of the rush is now behind us, it was fun to follow the developers working on ADN clients. It was a very collaborative effort, with alpha builds floating around and discussions about whether this or that should be done in a certain way.

As for the developers of ADN proper, well, you can try asking ADN CEO and Founder Dalton something to see if he’ll answer you in about 30 seconds. He actually does. 🙂

It all feels like a big community where everyone feels a bit like they own the place and wants it to thrive. Again, I think Matt is on the money on why this is so:

We value what we pay for. We not only pay for things which we deem to be of value, but we also retrospectively assign and justify value based on what we’ve paid. Any consumer is familiar with the simple psychology of cost equating as much to value after the transaction as value does to cost beforehand (likely moreso, from my own experience). At its core, I don’t think that the reason for the noticeably different, warmer, more discursive “feel” of ADN is any more complicated than that.

I personally love the service and I think you should consider it too. There is a free tier account that allows you to follow up to 40 people for free, as long as you’re invited by a current user. If you’re interested, I have a few invites.

Feel free to comment on this post by using this Google+ thread or also by talking to me on, where else, ADN, where I’m @robteix. And of course Twitter isn’t going anywhere and I’m there too.

Rolling out your own Fusion Drive with the recovery partition

[Screenshot: Disk Utility showing the Fusion Drive]

My Macbook Pro has two disks, an HDD and an SSD, each around 240GB. With the details of Apple’s Fusion Drive coming out, I decided to do what any reasonable geek would do to their production computer: implement my own untested, highly experimental, and barely understood Fusion Drive.

One of the things that initially put me off doing this was that, according to the 3,471,918 tutorials that have popped up in the last 10 minutes, it would cause me to lose my Mountain Lion recovery partition, because these partitions are not supported in a Fusion Drive. Turns out this is not exactly true.

Fusion Drive is just a marketing term for what is essentially a CoreStorage logical volume spanning an SSD and an HDD. And although you cannot have the recovery partition inside a CS logical volume, that doesn’t mean you can’t have both a recovery partition and a Fusion Drive at the same time. It’s all in the diskutil man page, by the way:

Create a CoreStorage logical volume group. The disks specified will become the (initial) set of physical volumes; more than one may be specified. You can specify partitions (which will be re-typed to be Apple_CoreStorage) or whole-disks (which will be partitioned as GPT and will contain an Apple_CoreStorage partition). The resulting LVG UUID can then be used with createVolume below. All existing data on the drive(s) will be lost. Ownership of the affected disk is required.

What matters is the part about specifying partitions: we’re not limited to using whole disks. So here’s what I did.

I rebooted my system and held the option key so I could select my recovery partition as the startup disk. Once the OSX recovery started up, I launched a terminal to do the dirty work.

diskutil list

From this I noted two things: (a) the main SSD partition (the one holding my OSX install, which sits beside my recovery partition) and (b) the disk name of my HDD. They were respectively disk0s2 and disk1 in my case, but they’ll very likely be different for you. Then the magic begins.

diskutil cs create "Fusion Drive" disk0s2 disk1

(For crying out loud, you need to change disk0s2 and disk1 for whatever makes sense on your system!)

That created the CoreStorage logical volume. Then I listed it all again to note the new logical volume’s UUID.

diskutil list

The UUID is a long identifier like F47AC10B-58CC-4372-A567-0E02B2C3D479. You’ll need it next to actually create the volume where you’ll be installing your system.

diskutil coreStorage createVolume F47AC10B-58CC-4372-A567-0E02B2C3D479 jhfs+ "Macbook FD" 100%

The command above will create a volume named “Macbook FD” using 100% of the logical volume we had created earlier.

I then restored my Time Machine backup and that’s it.

Update: Note that after this process, the Recovery partition will still be present, and things that require it (such as Find My Mac) will work fine. Some people correctly pointed out, however, that you can no longer boot from the recovery partition by using the menu shown when holding ⌥ (option) during boot. I’m not sure why that is, but fear not: it will still boot normally by pressing ⌘R (command + R).

How I accidentally became a domain squatter

A couple of days ago, I was listening to one of my favourite podcasts, The Frequency, when one of the hosts, Dan Benjamin, thought of a cool domain name, ohitson.com, and checked to see if it was available. Turns out it was, and he said he was registering it right there and then. Now, two things: (a) I was listening to a recorded podcast, not live; and (b) I thought to myself, damn it! That is a cool domain name.

The next day, I launched Hover and checked the domain name, and to my surprise it was available. I simply thought either Dan had given up on it or, most likely, I had misunderstood the domain name he was talking about and it had fortunately made me think of this cool domain name. I even checked Google to make sure “Oh, it’s on” was really written like that 😛

Obviously I went ahead and registered the name. After that, I listened to that day’s The Frequency and heard Dan tell Haddie something to the effect of “oh, I forgot to register that domain yesterday!” That’s when I thought, uh-oh, maybe I had heard the correct domain name after all.

As it turns out, I was just listening to today’s episode and guess what? Dan mentions that someone registered it due to his mention on the show (which is technically true.)

But I am a nice guy. I offered to transfer the domain to Dan for free just a few minutes ago. I’m not sure he’ll see my posts on App.net or Twitter; if not, I’ll try again a few times. I really don’t have any intention of keeping this domain name as long as he still wants it. That would be unfair.

Update 7 Nov 2012: can you believe he actually accepted my offer? What a douche! Just kidding, it was the Fair Thing to Do™ and I’m happy to say the domain has been transferred to His Benjaminship already.

In which Rob gets app.net

I got myself an app.net account (@robteix) a couple of days ago. [Cue the ihave50dollars.com jokes.] Still haven’t really used it a lot but am starting to enjoy the discussions about the API development.

I joined mostly to support the idea—which I must admit is pretty insane. But what would the world be without insane ideas? ADN will never be Twitter or Facebook, but maybe it doesn’t need to be. Maybe it really only needs to be a sane environment for people like me. I like it.

They do need a better name though.

  • Posted using BlogPress from my iPhone

My need for anonymity

Much has been said about the pros and cons of anonymity lately, prompted by the Google+ TOS, which require the use of one’s real name. No pseudonyms allowed, except apparently if you call yourself Lady Gaga or 50 Cent.

I have seen many kinds of arguments both for and against the use of aliases and I will not repeat them here. There is however one use of aliases that I haven’t seen stated anywhere and that coincidentally affects me personally. Perhaps this is so because the problem I am about to present is not so common after all. Or perhaps it is common but people decide not to talk about it. I have no way of knowing.

Anonymity is a vital necessity to people with a certain kind of disability, a mental disorder. I am such a person. As some of my friends know and others mock, I suffer from a mental condition called social phobia, also known as social anxiety. I take medications that help me overcome some of the most serious effects and that allow me to do things like write about it on this very blog.

Social anxiety manifests itself in varying degrees in all kinds of social interactions. And the levels of manifestation are not what you might expect. I regularly give presentations without a second thought; I’ve given talks to hundreds of people. And yet, ordering a pizza over the phone is a terrifying experience for me. No matter how many times I’ve done it, I still have to “prepare” myself every time. I rehearse, playing several unlikely scenarios in my head, until I finally get the courage to dial the number and talk to the person on the other side. One characteristic of this anxiety disorder is that, rationally, I know there is nothing wrong; there is no risk in calling the pizza place. But the brain acts as if there were. But I digress.

I love coding. I have been doing it since I was a kid, and it’s the best thing I know how to do. And then there is open source. Open source projects should be the perfect venue for me to have fun, except I am scared stiff by the idea that someone might laugh at the code. It got to the point where it was impossible for me to contribute. Then I came up with a solution: an alias. For the past several years, I’ve lived two different lives online: one as myself and another under an alias. I keep them strictly separate.

Using the alias, I actively contribute to several different projects. And I enjoy it all. And it would be impossible for me to do that using my own name. My pseudonym allows me to work around my condition. It allows me to live my life.

I understand the rationale behind the requirement for real names on Google+. But I also know that the requirement makes it impossible for people like me to be really free on the Internet. So far, Google hasn’t figured out my alias. Hopefully it never will.

(Photo by Abhishek Singh)