## Statistical Measures

In the stats book that I used at college, *A First Course in Probability* (sixth ed.) by Sheldon Ross, I found two problems that seem paradoxical when juxtaposed. Can you explain the opposite results?

Ch 2 Axioms of Probability, Self-Test Exercise #15.

Show that if $P(A_i) = 1$ for all $i\geq1$, then $P\left(\bigcap\limits_{i=1}^{\infty} A_i\right) = 1$.

Ch 5 Continuous Random Variables, Theoretical Exercise #6.

Define a collection of events $E_a, 0 < a < 1$, having the property that $P(E_a) = 1$ for all $a$, but $P\left(\bigcap\limits_{a} E_a\right) = 0$.
Hint: Let random variable $X$ be uniform over $(0,1)$ and define $E_a$ in terms of $X$.
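For what it's worth, here is a sketch of how I reconcile the two (my own reading, not Ross's printed solution): the first result leans on countable subadditivity, which simply has no analogue for uncountable families of events.

```latex
% Ch. 2, #15: pass to complements and use countable subadditivity.
P\left(\bigcap_{i=1}^{\infty} A_i\right)
  = 1 - P\left(\bigcup_{i=1}^{\infty} A_i^c\right)
  \geq 1 - \sum_{i=1}^{\infty} P\left(A_i^c\right)
  = 1 - 0 = 1.

% Ch. 5, #6: with X uniform on (0,1), take E_a = \{X \neq a\}.
% Each P(E_a) = 1 - P(X = a) = 1, yet
\bigcap_{0<a<1} E_a = \{X \notin (0,1)\},
% an event of probability 0. There is no contradiction:
% subadditivity only controls countably many sets at once.
```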

## Acer Swift 1

I decided to upgrade my laptop, and chose to get the Acer Swift 1 SF113-31-P6XP (the rose gold color). The Acer website indicated that this model would have a keyboard backlight, but it does not. It has 3 stuck pixels and a weird bright spot in the display that looks like a reflection, which I only notice when showing bright colors. Since I changed all my settings over to dark mode, I don’t notice these issues at all.

The laptop itself is slightly underpowered, so occasionally an application will behave as if it got paused for a second. Because my main use case for this device is just surfing the web in bed, I don’t mind that behavior at all, and have come to expect it even on workhorse machines. I blame JavaScript, plugins, and browser architecture generally. Bonus points: the device has no fan and runs completely silently. I find the trade-off worth it. The N4200 supports bursting up to 2.5GHz and Linux makes good use of that ability. I had no problem streaming videos and playing them full-screen.

My biggest complaint comes from the trackpad, which kept freezing. So, I took some steps to remedy that problem (in Ubuntu).

1. `apt install xserver-xorg-input-synaptics` — for some reason this does not install with `xserver-xorg-input-all`. Its presence opens up a bunch of configuration options regarding click behavior, scrolling, palm detection, etc.
2. Create a script that will cycle the touchpad when it freezes, and create a global keyboard shortcut to run it. If the touchpad freezes, at least you have a button to get it back.

```shell
#!/bin/bash
# Find the touchpad's xinput device ID, then disable and re-enable it.
declare -i ID
ID=$(xinput list | grep -Eio '(touchpad|glidepoint)\s*id\=[0-9]{1,2}' | grep -Eo '[0-9]{1,2}')
xinput disable $ID
sleep 0.1
xinput enable $ID
```

I spent a day using this setup and must have hit the cycle button at least 50 times. Though it was quick, it got really annoying.

3. One time the touchpad didn’t respond after resuming from sleep. So I dug deeper to see if I could virtually unplug and replug it.

If the touchpad doesn’t come back after using the above script, then you can cycle the responsible kernel module.

```shell
sudo modprobe -r hid_multitouch
sudo modprobe hid_multitouch
```
4. After some more research, I learned that other Acer models had similar issues, but they could be fixed with a change to the BIOS settings. During bootup, press F2 to access the BIOS, then switch Main > Touchpad from Advanced to Basic.

For the past five days, I have not had to cycle the touchpad (step 2) since changing the BIOS flag (step 4).

## Debunking the Intrinsic Value Argument

I have to admit to having updated my mind about the “intrinsic value” argument that many people cite as a justification for treating gold as a money (vs paper currency). I’ve previously attempted to explain away this argument as a side-effect of other properties[Gold is Money] or to dismiss it as an unrelated feature[The Commodity Money Myth]. Now I have some good reasons to believe that the entire argument is unsound.

First, a conversation that I had with a fellow camper at the Jackalope festival.

Person: Gold is money and Bitcoin is only a currency.
Me: Ok, what’s the difference?
Person: Well, money can operate as a store of value.
Me: Interesting, how do you store something subjective?
Person: *mumble something about intrinsic value that I find unconvincing and irrelevant*

If you take the Subjective theory of value seriously, then it’s obvious that “intrinsic value” is an illusion. Gold has held its value for a long time, sure, but that’s because people, individuals, continue to have a high subjective value for that material. I don’t see a big problem expecting similar valuations in the future, but that position says much more about human preferences than it does about a shiny yellowish metal.

Next, a dismantling of the argument’s structure.

To say that gold makes a good money because it has some other uses (jewelry for the ancients, electronics also for modern society) is to cite competing non-monetary uses! Do you really find it convincing to hear someone say “Y is a good X because it’s useful for non-X” or “Let’s trade with this substance instead of putting it to these other uses”? Consider some of the implications:

• If the other uses become more highly valued than facilitation of trade, your commodity money will disappear from circulation.
• Those other uses have to compete with the use as money, driving their price higher than it otherwise would be.

Wouldn’t the world be better off to use that gold industrially or culturally rather than sequester it away in a vault? Cryptocoins can help with that liberation, for they have no competing uses. By explicit design, their highest value use is to facilitate trade.

Furthermore, under the theory of intrinsic value: the more competing uses a substance has, the better a money it becomes. Ridiculous! The very structure of the intrinsic value argument undermines what it attempts to buttress.

## Cognitive Bias in Artificial Intelligence

I believe that artificial intelligence will suffer from cognitive biases, just as humans do. They might be altogether different kinds of bias; I won’t speculate about the details. I came to this conclusion by reading “Thinking, Fast and Slow” by psychologist Daniel Kahneman, which proposes that the brain has two modes of analysis: a “snap judgement” or “first impression” system and a more methodical, calculating system. Often we engage the quick system out of computational laziness. Why wouldn’t a machine do the same?

Researchers in machine learning already take careful steps to avoid many biases: data collection bias, overfitting, initial connection bias in the neural net, etc. But I haven’t yet heard of any addressing computational biases in the resulting neural net. I think precursors of biased behavior have already been observed, but were explained away as being present in the input data, as resulting from the reward function during training, or as some other statistical inadequacy.

Let me give a simplified example (and admittedly poor example for my argument) of cognitive bias present in humans and reflect on why it would be difficult to filter out such bias in a machine learning algorithm.

The Muller-Lyer Illusion consists of a pair of arrows with fins pointing away from or toward the center. Each shaft has the same length, but one appears longer. As a human familiar with this illusion, I will report that the shafts have equal length. Yet, subjectively, I do indeed perceive them as being different. My familiarity with the illusion allows me to report accurate information, effectively lying about my subjective experience.

Now suppose that we train a neural net to gauge linear distances, and we have a way of asking it whether the lines in the Muller-Lyer diagram have the same length. What will it report? Well, that depends. Being a machine, it might have a better mechanism for measuring lines directly in pixels and thus be immune to the extraneous information presented by the fins on the ends of those lines. (Humans ought to have that functionality at the cellular sensory level as well, yet we don’t.) But if the Muller-Lyer Illusion doesn’t fool the neural net, does a different picture confuse it? So far, yes, such things happen: the ML categorizes incorrectly when a human wouldn’t. We tend to interpret this as a one-off “mistake” rather than a “bias”. But the researchers succumb to evidence bias: they have only one example of incorrect categorization, and they don’t perform a follow-up investigation into whether that example represents a whole class, which would demonstrate a cognitive bias in the neural net.

Now suppose the researchers do perform the necessary diligence and discover a cognitive bias. They generate new examples and retrain the net. Now it performs correct categorization for those examples. Have they really removed the bias at a fundamental level? Or does the net now have a corrective layer, like I do? I presume the answer here depends on the computational capacity of the net: simple nets will have been retrained, while more complex ones might only have trained a fixer circuit, which identifies the image as being a specific kind of illusion. Thus, the more capable the neural net, the more likely it starts looking like a human: with a first impression followed by a second guess.

How ought researchers approach this problem? Should the biases get identified one at a time and subsequently be removed with additional training? Due to the large number of biases (cf. all of Less Wrong, or this list of cognitive biases), I think that approach doesn’t scale well. Especially considering that biases result from cognitive architecture, and trained neural nets differ from human brains, I think the biases in ML will be new to us. Those should be exciting discoveries! I propose training with multiple adversarial nets, each trying to confuse the categorizer. This approach contains architectural symmetry, so it probably won’t work for biases that result from differences in wet-ware vs. hard-ware computation. Those should be even more interesting discoveries!
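As a toy illustration of the adversarial idea: everything below is hypothetical, with a linear “categorizer” standing in for a real neural net and invented weights, but it shows how an adversary can nudge an input against the model’s gradient until the category flips.

```python
# Toy adversarial example against a linear "categorizer" (hypothetical
# weights; a stand-in for a neural net). For a linear model the gradient
# of the score w.r.t. the input is just the weight vector, so stepping
# each input component against sign(w) lowers the score as fast as
# possible -- the fast-gradient-sign idea in miniature.

def sign(v):
    return 1 if v > 0 else (-1 if v < 0 else 0)

def score(w, x):
    return sum(wi * xi for wi, xi in zip(w, x))

def classify(w, x):
    return 1 if score(w, x) > 0 else -1

def adversarial(w, x, eps):
    # Perturb x by eps against the gradient's sign to confuse the model.
    return [xi - eps * sign(wi) for wi, xi in zip(w, x)]

w = [0.5, -0.25, 1.0]   # hypothetical trained weights
x = [1.0, 1.0, 1.0]     # an input the model classifies as +1

x_adv = adversarial(w, x, eps=0.9)
print(classify(w, x))       # 1
print(classify(w, x_adv))   # -1
```

A real adversarial trainer would fold inputs like `x_adv` back into the training set, which is exactly the retraining loop discussed above.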

Humans clearly have a large reliance on contextual clues and the whole point of investing in ML is to capture and replicate that level of cognition. But contextual clues can mislead as easily as they help. So ML ought to have cognitive bias, as humans do, but very likely different kinds. Efforts to train out that bias might even be met with repulsion. Humans feel comfort with the familiar, so cognition which has our biases removed should feel viscerally unwelcome. For example, robots which lack biases associated with empathy will be perceived as sociopathic.

## Your vote doesn’t count, but it does matter.

Under their current political system, the American chattel have a “civic duty” to voice their opinion about who they want as a representative. Every 4 years, potential presidents spend billions on campaigns to excite the plebeians to “get out and vote!” and “make their voice heard!”. That money would certainly have more impact if spent on the actual causes that Team Red and Team Blue claim to care about. Rather than offer direct assistance, both parties choose instead to promulgate the most basic falsehood possible: that your vote counts in the national election for president. Nothing excites people more than sports that matter least.

Let’s count the ways that the system ensures your vote does not count.

First, gerrymandered districts ensure predictable voting outcomes. Politicians regularly carve up their constituencies in ways designed to support the current power balance, usually to protect the incumbent. From the national perspective, these districts make state outcomes predictable, whether Red or Blue.

Second, either others outnumber your vote when you hold the minority opinion or you vote with the tide. “In either case, your vote does not decide the outcome. In all of American history, a single vote has never determined the outcome of a presidential election”[Reason, 2012].

Third, the Electoral College can ignore the popular vote. “There is no national election for president, only separate state elections. For a candidate to become president, he or she must win enough state elections to garner a majority of electoral votes.”[Walbert, 2004]. Electoral delegates have no obligation to vote the same way as the popular vote of the state they represent, but they usually remain faithful.

Fourth, in the event that a state doesn’t have a clear position, the Supreme Court might decide. In 2000, the state of Florida did not have a clear preference, even after multiple recounts. When hearing the lawsuit over whether the recounts should continue, the Supreme Court accepted the de-facto power to decide the outcome of the election.

Fifth, Congress can decide. According to the rules of the Electoral College, “If no candidate wins a majority of the electoral votes or if the top two candidates are tied, the House of Representatives selects a president from among the five candidates with the most votes.”[Walbert, 2004]. According to this rule, Libertarian Gary Johnson has a chance in 2016 if he can win his home state of New Mexico [Wilson, 2016].

Now that I’ve given reasons why your vote doesn’t count, let me address why it does matter.

South Africa endured many years of violence under the Apartheid regime. Many people and countries worldwide boycotted Apartheid, but the US government insisted on supporting the Apartheid regime, saying that while the US abhorred Apartheid, the regime was the legitimate government of South Africa. Then the Apartheid regime held another election. No more than 7% of South Africans voted. Suddenly everything changed. No longer could the US or anyone else say that the Apartheid regime had the consent of the governed. That was when the regime began to make concessions. Suddenly the ANC, formerly considered to be a terrorist group trying to overthrow a legitimate government, became freedom fighters against an illegitimate government. It made all the difference in the world, something that decades more of violence could never have done.

In Cuba, when Fidel Castro’s small, ragged, tired band were in the mountains, the dictator Batista held an election (at the suggestion of the US, by the way). Only 10% of the population voted. Realizing that he had lost the support of 90% of the country, Batista fled. Castro then, knowing that he had the support of 90% of the country, proceeded to bring about a true revolution.

In Haiti, when the US and US-sponsored regimes removed the most popular party from the ballot, in many places only 3% voted. The US had to intervene militarily, kidnap Aristide, and withhold aid after the earthquake to continue to control Haiti, but nobody familiar with the situation thought that the US-backed Haitian government had the consent of the governed or was legitimate.

(from *You’ve Got to Stop Voting* by Mark E. Smith)

Whether your candidate has a chance or not, your participation in the vote directly demonstrates your “consent to be governed”. The politicians have a system of elaborate and arcane rules, which they deliberately devised to disenfranchise your voice. The political class cares far more about you checking a box than they do about which box you check.

“Boycotting elections alone will not oust the oligarchy, but it is the only proven non-violent way to delegitimize a government.”[Smith, 2012].

## Notes: Market Failure: An argument both for and against government (David Friedman)

I attended the Young Americans for Liberty state convention yesterday in order to hear the venerable David Friedman speak. Below are my notes highlighting the main points of the talk. You may recognize some examples and positions if you’ve been following his work.

Economists studying market failure make legitimate arguments against laissez-faire, but those arguments make a stronger case against government. Let’s define market failure as those circumstances in which individual rationality doesn’t lead to global rationality. For example, suppose we were part of an army standing on the battlefield. I think: there’s only a minuscule chance that my defection affects the battle, and I will almost surely live as the others delay the opposition by fighting. All of us execute that logic, we all run, and the opposition kills us. To take another example, I am warmed greatly by burning coal and make only a minuscule contribution to the London fog. But it’s possible (though not always) to engineer around the failure with a change of the rules. For example, the Arabs deadlocked in the open desert, making no progress toward the nearby oasis as they insist on winning the “who’s got the slowest camel” competition. Economists largely assume that people act in their own rational self-interest, and that’s generally the case.

But what about Public Goods? (aside: The government often produces private goods, such as the post, while many public goods are produced privately, such as education and libraries.) Let’s define a public good as one that’s open to consumption, where the producer cannot capture payment from the consumer. For example, the beauty of the Sears Tower, listening to an unencrypted radio broadcast, or watching a TV program. For some cases, the market arrived at a clever solution: couple the public good (radio program) with a public bad (advertisements) and let the baddies subsidize the good for the enjoyment of all.

There are often externalities, in both directions: some where the costs outweigh the benefits, and others where the cost is less than the benefit but the producer can’t collect enough to make it worth the effort. For example, take a restaurant, a movie theater, and a store. It might not be worth running any individual enterprise, as each imposes foot traffic on its neighbors (a negative externality). But if they occur together, say in a mall, then the traffic is mutually beneficial to all stores (the externality becomes positive) and the rent for a shopwindow captures some of that.

Not all problems are solvable with laissez-faire, and the market result is often less than ideal (comparison to the ideal is how the market ‘failed’ even when the outcome was considered ‘good enough’ by the people). With perfect information you might be able to obtain the ideal, but we are in very short supply of good dictators.

Democracy also has its failure modes. For example, voters should be knowledgeable and informed when they cast their ballots. But proposals are seldom transparent. For example, the Farm Bill is never advertised as a money transfer program. Plus, in a representative system, the voter has a double-indirection problem. First they must know how good or bad each bill is, and then they have to know how the candidates voted. Often that’s unknown because the candidate is new (actually, all upcoming bills are unknown). Given that the chances of changing the election are slim, how much should a voter invest in becoming informed? Not much; they should be rationally ignorant, with two exceptions. One, they think politics is fun and do it out of intrinsic interest; two, they have influence or represent a special interest and have a high stake in the result. In this market, we see concentrated special interests winning benefits over dispersed victims.
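The rational-ignorance point can be put in back-of-the-envelope numbers. Every figure below is invented for illustration: even with a generous chance of casting the deciding vote, the expected payoff from becoming informed is tiny.

```python
# Hypothetical numbers only: expected value of becoming an informed voter.

p_decisive = 1e-7                    # optimistic odds your vote decides the election
value_of_better_outcome = 1_000_000  # what the better candidate is worth to you ($)
cost_of_research = 40 * 25           # 40 hours of study at $25/hour

expected_benefit = p_decisive * value_of_better_outcome
print(round(expected_benefit, 2))           # about ten cents
print(expected_benefit < cost_of_research)  # True: stay rationally ignorant
```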

What about long-term planning? In the market, why should I plant black walnut trees which won’t bear nuts until I’m long dead? Well, ten years from now I can sell them to a person who doesn’t want to wait as long. In turn, they could sell years later to a still more impatient person. But the transfer needs strong and secure property rights: I must expect that the field is still mine to sell after ten years. Politicians, in contrast, have very insecure property rights. They will often be out of office before the benefits of a bill become evident, and the other party might be in office at that time and claim all the credit! So they have a strong aversion to paying large amounts now when the benefits are far away in the future. You can see that their rhetoric does not reflect their actual behavior, because they often promise without making delivery in that uncertain future.
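The black walnut example is a present-value argument. The prices and discount rates below are invented: secure property rights let an impatient planter capture distant future value today by selling the field to someone more patient.

```python
# Hypothetical present-value comparison for the black walnut field.

def present_value(future_amount, annual_rate, years):
    # Standard discounting: value today of a payment `years` out.
    return future_amount / (1 + annual_rate) ** years

harvest_value = 10_000   # invented value of the mature trees
years_to_harvest = 50

my_valuation = present_value(harvest_value, 0.08, years_to_harvest)     # impatient planter
buyer_valuation = present_value(harvest_value, 0.03, years_to_harvest)  # patient buyer

# The patient buyer values the field more, so a sale today benefits both --
# but only if I can credibly transfer ownership of the field.
print(buyer_valuation > my_valuation)   # True
```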

So politicians tend to: 1. promote policies that sound good on the surface (easy to advertise); 2. enact bills that benefit special, concentrated interests at the expense of dispersed victims; and 3. take short-term actions. Finally, and what’s worse, they make decisions that have very widespread effects (externalities) without knowing the outcome. For example, a panel of judges making a decision about a vaccination program (for polio?) treated an annual recurring cost as a one-time payment, mispricing the program by a factor of 40 and negatively affecting thousands of people.

We must conclude then, that Market Failure is structurally endemic to Politics and the theory of market failure is a better predictor of government behavior than it is of free market behavior.

The conclusion generalizes to everyday life. For example, for those with a spouse, who does dishes after dinner? There are two options: 1. one cooks, the other cleans, and 2. the same person cooks and cleans. Everyone chooses option 2, because then the cook controls how much cleaning takes place (makes a meal with fewer dishes). Also, this is why we tell children to clean up their own mess, rather than the messes of others.

There’s also the silent student problem. When the instructor asks “does everyone get it?” they never get a response. Mostly because the cost of asking is looking dumb in front of everyone (though, rationally, in a large classroom there must be others who didn’t understand), but also because the benefits of asking will be dispersed to everyone present (they might do better on the exam). One solution is to use a button on the floor, which can be inconspicuously pressed, activating a signal at the back of the class visible to the teacher, but not the students.

Or economics and law. Suppose a proposal that makes armed robbery a capital crime, assuming murder is already a capital crime. Many people might be for it as “being tough” and providing a strong disincentive for armed robbery. But the economist will ask: do you really want all armed robbers to murder their victims? Because if the cost of robbery is the same as robbery + murder, then I, as a robber, will surely kill my victims, for the punishment (cost) is no different and there is a benefit: the dead won’t identify me to the police.
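The robber’s reasoning is a simple expected-cost comparison. All numbers below are invented: when robbery and robbery-plus-murder carry the same punishment, killing the witness adds no expected cost while lowering the chance of being caught.

```python
# Hypothetical expected-cost calculation for the marginal-deterrence point.

punishment = 100.0            # same penalty for both crimes (capital, in the talk)
p_caught_witness_alive = 0.5  # the victim can identify me
p_caught_witness_dead = 0.3   # no witness left

expected_cost_rob = p_caught_witness_alive * punishment        # 50.0
expected_cost_rob_murder = p_caught_witness_dead * punishment  # roughly 30

print(expected_cost_rob_murder < expected_cost_rob)  # True: murder becomes rational
```

Graduated penalties restore the margin: if murder costs strictly more than robbery, the comparison flips back.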

Questions.

What should we do? Given that rationally ignorant voters make decisions on freely available information, politicians complicate the issue by advertising bills with plausible deniability. For example, the auto tariff is about protecting jobs and hurting foreigners; the auto-workers union doesn’t ask for a bill that’s a direct transfer payment. How should we respond? One: change the body of free information; spread more accurate information and ideas. Two: create alternatives; run a business that competes, showing that the government-provided solution isn’t any good (in quality or quantity).

What about replacing the government entirely? He dealt with this in the 3rd part of The Machinery of Freedom, discussing private security insurance firms that operate under the discipline of repeated interaction. Customers have their choice of firm.

With increasing productivity, what about ensuring work/jobs? First, we should think of jobs and workers as two distinct numbers, and then notice they are always very close to each other. Sometimes apart (depression era), sometimes close, but very strongly correlated (both increased over the last century). This implies they are in an equilibrium situation, so we should not worry too much. Also, look at the fixes: the racism inherent in the minimum wage (historical union). And be careful: if you give the government power to do XYZ, how will they *actually* use that power?

What about education? School voucher program implies that most schools would be comparable in quality. That’s not really different from being centrally managed. Instead, if we value diversity, we should remove single organizational administration and control, we should decentralize.

What about Rothbard and 100% reserves? It’s actually not desirable for a bank to have 100% backing, especially if it has other liquid assets. It could then, when faced with a run, sell the other assets in exchange for the backing material (e.g. gold or silver) and make good on the original agreement. Personally, would prefer some electronic, anonymous, cash-like system.

What about the incentives to incarcerate faced by private prison operators? Well, those same incentives are faced by state-run prisons. It sounds good to be “tough on crime” and the taxpayers foot the bill. Refer to David Skarbek’s book, The Social Order of the Underworld: How Prison Gangs Govern the American Penal System, to see how even outlaws have created an ordered society. Also, read Poul Anderson’s The Margin of Profit for an answer on “how to get people to stop doing bad things” (answer: make it unprofitable), and my book Law’s Order: What Economics Has To Do With Law and Why It Matters for an examination of the costs and benefits of a property system.

## Notes: Problems with Libertarianism

David Friedman gave a talk Problems with Libertarianism: Hard Problems (and how to avoid them).

1. What rights do you have against a criminal?
If your only right is to re-claim the stolen property, then at worst the thief breaks even.
To what extent can we mete out deterrent punishment?
What’s the factor of retribution? Why 2x, not 3x or 1.5x?
What if you only catch 1/10th of the thieves? Then 10x?
But that punishes the guy caught for the crimes of those not caught.
Also, the number of people caught is a function of how much is spent trying to catch them.
What if a mistake is made? How much trouble should we take to avoid making one?
2. What are you entitled to do to defend your rights?
Capital punishment for petty theft?
3. Human shield problem.
Can you shoot back, and risk killing the innocent shield?
Can the voluntary defense fund aim nuclear weapons at Moscow?
Possibly killing innocent victims (more so than you) of the Soviet Union?
If it’s acceptable to run roughshod over innocents in attacking the aggressor, then it’s acceptable to draft.
4. Absolute property rights.
Trespassing photons across absentee-held land.
You think they don’t do damage, but I, the owner, get to decide that.
Allow you to breathe in, but not out, because I don’t like the CO2.
5. How do we distribute the risk of injury should I crash my airplane?
I get the benefit of flight, at cost to you without your permission.
6. Property in land, not derived from owning self+labor.
I could walk across before you build up the house and path, so my use is not in conflict with your labor.
7. Very large part of land in the world is stolen.
8. Public good problem.
If something is desirable, then the market will provide it? We can only say maybe.
def: a good whose producer cannot control who gets it; ex: radio broadcast
Combine the good (positive value to customer + positive cost of production) with a public bad (negative value to customer + negative cost of production); ex: adverts
What about national defense? (defense against nations) Hard to stop missile in flight by determining if target has paid for defense.
Is answer to aggress the funds, or to surrender?
soln: Assuming problem doesn’t exist.
Soviets have no interest in attacking, only have tanks to prevent us from doing so.
soln: Somewhere there’s a proof that market will provide.
Nobody’s found it.
soln: It’s a lifeboat problem.
Still have to find the answer, we do live on a spaceship after all.
soln: The is-ought dichotomy.
But then in some way you’re defending the ability to do whatever is necessary.
soln: Pooling money.
The good is worth X, but what I spend is only a fraction of the funds, and I receive the benefits regardless.
9. Privatizing the government property.
soln: sell it. But if the government doesn’t own it, what right has it to sell it?

You can also avoid the problems by changing the subject.

## Bidding to establish Terms of a Contract

As a preliminary, I feel obligated to mention that writing a contract only protects you on paper. The real world has such complexity, with innumerable contingencies, that any contract will fail to enumerate them all. Establishing an explicit contract often also cements distrust between parties, due to its impersonal nature. The contract just gives a written record of what the parties agreed to, so that, should they end up in arbitration, others can more readily see the terms of the agreement. Because of these factors, writing a contract is a costly affair. So, what should go into it?

Suppose two parties, even after recognizing the costs, still wish to write a contract. They agree on many standard items and write them down (or borrow them from a template), but the parties still have some remaining details about which they disagree. Each has an interest in investing some time to sort out these details in order to avoid unpredictable future conflict. But how shall they obtain agreement on what actually gets written down?

I thought of one approach: holding a silent auction. For every issue on which they have a disagreement, they can write down all the mutually-exclusive options and hold a silent Vickrey auction.
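The mechanics can be sketched in a few lines. In a Vickrey (second-price) auction, each party privately bids what winning the clause is worth to them; the high bidder wins the clause but pays only the second-highest bid, which makes honest bidding the dominant strategy. The parties, clauses, and amounts below are all hypothetical, and how the payment is settled (cash, or offsets against other clauses) is left open.

```python
# Sketch: settle each contested contract clause with a sealed-bid
# second-price (Vickrey) auction. Names and amounts are invented.

def vickrey(bids):
    """bids: dict of party -> sealed bid. Returns (winner, price paid)."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    price = ranked[1][1] if len(ranked) > 1 else 0  # second-highest bid
    return winner, price

# Clause in dispute: each side bids for its preferred wording.
bids = {"Alice": 120, "Bob": 80}
print(vickrey(bids))   # ('Alice', 80)
```

Because the winner pays the loser’s valuation, each disputed term ends up with whoever values it most, which is the point of the exercise.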

## The Problem and Its Solutions

If we were living in a perfect world, the business logic would be separated from the presentation layer. Since Rave sits atop a rich GUI, where event handlers can execute arbitrary code, there exists a strong temptation to put business logic in the presentation layer. The fact that we code both parts using the same language (C++) makes this temptation doubly hard to resist. Indeed, sometimes a clear cut separation doesn’t exist. So we shouldn’t find it at all surprising that our founding coders may not have kept up a wall of separation between GUI and Business Logic.

Let’s walk through an example that I have adapted from Martin Fowler’s post on GUI Architectures.

Suppose we have a system requirement that the GUI must display a red box when a seat is disconnected, a green box when connected, and a yellow box for a slow connection. Suppose further that the application already uses an integer to represent the connection state, and presents it through the function linkRate(). It ranges from 0 Mbps (disconnected) to 1000 Mbps (full connection), taking various intermediate values depending on a measured traffic rate (not just the OS ethernet link state). The green box represents any measured rate above 700 Mbps, while the red box represents any rate below 5 Mbps.

Where should the logic for choosing the box color reside?
Where should the listing of boundary values for each category reside?
Where should the listing of colors reside?
If you had to write tests to prove your solution worked, would you change your mind on where to place that code?

| Logic Placement | Description |
| --- | --- |
| GUI | The GUI contains all the smarts. It reads the value of linkRate() from the application and then performs its own calculation to determine the color. |
| Shared | The GUI and application share responsibility. The application provides a linkRateState() that presents an enum which the GUI then maps to a color. |
| Application | The application contains all the smarts. It does the heavy lifting and provides a linkRateColor() method that tells a really dumb GUI what color to show. |
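Whichever layer owns it, the mapping itself is tiny. Here is a sketch in Python for brevity (the project itself is C++). The thresholds come from the stated requirement: red below 5 Mbps, green above 700 Mbps, yellow in between; behavior at exactly 5 or 700 is unspecified in the requirement, so yellow is assumed.

```python
# Sketch of the threshold mapping under discussion (Python stand-in for
# the C++ linkRateColor()). Boundary behavior at 5 and 700 is an assumption.

def link_rate_color(rate_mbps):
    if rate_mbps < 5:
        return "red"      # disconnected or nearly so
    if rate_mbps > 700:
        return "green"    # full or near-full connection
    return "yellow"       # slow connection

print(link_rate_color(0))    # red
print(link_rate_color(350))  # yellow
print(link_rate_color(900))  # green
```

The architectural question is only where this function lives and who unit-tests it, not what it computes.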

For some variation of value preferences, all of the above can be reasonable decisions. I have a bias towards making the GUI as easy to test as possible. You might think that idealism would incline me to favor the dumbest GUI possible, and you’d be right in most cases, but I want to draw out some reasons to make an exception for this example.

#### The Case for the Dumb GUI

Mostly I want a dumb GUI because testing it is very hard. To test the GUI, I must launch it within a harness that intermediates all the events, introducing programmability to events like clicks, drags, and keyboard presses. The harness requires a full simulation of the application, including connections to external services (database, file system, SCUs, etc). Finally, at least with squish and RAVE, the test scripts execute at glacially slow human speeds, sleeping for entire seconds to allow for menu animations and other GUI renderings.

Having the dumbest possible GUI would mean having a presentation layer so lightweight that it would be very improbable to get wrong. When the application tells it what color to show, the GUI has very little opportunity to err. The mapping logic of linkRateColor() would have a unit test in the application, ensuring conformance to system requirements. With a thin enough GUI, I wouldn’t care that it didn’t have automated tests.

But placing linkRateColor() in the application muddies its purity. Now the application must always and forever link against whatever library provides QColor. I can no longer build the application without some GUI library. If I want to re-use that component, I drag the dependency along with it. And, finally, no part of the application actually uses linkRateColor(); it exists only to support the GUI.

#### The Case for the Shared Responsibility

I have nothing to say here but “eeewww gross.” Unless there is an application-side consumer for linkRateState(), it’s not worth coupling the GUI and application with such a specific API. Should the specification change the categories or their boundary values, then both the GUI and the application may need updating. We shouldn’t choose designs that increase our maintenance overhead.
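For completeness, the shared design might look like this sketch (all names are hypothetical). It illustrates the coupling: the application owns the boundary values, the GUI owns the color mapping, and the enum binds the two sides together:

```python
from enum import Enum

class LinkState(Enum):
    DISCONNECTED = "disconnected"
    SLOW = "slow"
    CONNECTED = "connected"

# Application side: owns the boundary values.
def link_rate_state(rate_mbs: float) -> LinkState:
    if rate_mbs < 5:
        return LinkState.DISCONNECTED
    if rate_mbs > 700:
        return LinkState.CONNECTED
    return LinkState.SLOW

# GUI side: owns the mapping from state to color.
STATE_COLORS = {
    LinkState.DISCONNECTED: "red",
    LinkState.SLOW: "yellow",
    LinkState.CONNECTED: "green",
}
```

Adding a fourth category later (say, an orange “degraded” state) would touch the enum and both sides of the API at once.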

#### The Case for the Smarter GUI

If the application never has a need to know the boundary values for the different link rates, then we can assume that those values represent a specific presentation requirement. Given that circumstance, I favor placing the logic into the presentation layer.

Yes, this solution increases the GUI’s responsibilities, making it more complicated to test. Counter-intuitively, the increased testing difficulty of a more complex GUI has pushed me to advocate for making the GUI a stand-alone piece. The situation just serves to strengthen my next point about a stricter separation between presentation and application logic.

## The Wall of Separation (between GUI and Application Logic)

In an ideal world, the business logic carried out by the application and the presentation logic carried out by the GUI remain strictly separated. So separated that we can pull apart the two pieces and test them independently. We can even build a second GUI (for a new customer) without impacting the underlying application. With this separation, the application acts as a data Model while the GUI(s) merely present a View of that data.

For testability purposes, let’s pull apart the two pieces and envision a wall between them. The only communication link through that wall is an API, depicted as a network socket. The application (network server/data model) responds only to specific messages (requests for and updates to data) sent over the socket. It keeps the GUI informed about changes by emitting other messages (events).
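The socket-style wall just described can be sketched with messages modeled as plain dicts instead of real network frames (the message shapes and key names here are assumptions, not a real protocol):

```python
class Application:
    """The data-model side of the wall: answers request messages
    and emits change events to whoever subscribed."""

    def __init__(self):
        self._data = {"linkRate": 0}
        self._subscribers = []

    def subscribe(self, callback):
        """Register a GUI-side listener for change events."""
        self._subscribers.append(callback)

    def handle(self, message):
        """Process one request message and return the response."""
        if message["type"] == "get":
            return {"type": "data", "key": message["key"],
                    "value": self._data[message["key"]]}
        if message["type"] == "set":
            self._data[message["key"]] = message["value"]
            event = {"type": "changed", "key": message["key"],
                     "value": message["value"]}
            for callback in self._subscribers:
                callback(event)
            return {"type": "ok"}
        return {"type": "error", "reason": "unknown message type"}
```

Nothing here cares whether the messages travel over a socket or arrive as direct method calls; the protocol is the wall.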

The clear separation between application and GUI serves dual purposes:

1. It makes us think harder about which piece (GUI or app) should receive new logic.
2. It allows tests for each piece to remain laser-focused on that piece without getting distracted by the other parts of the system.

#### Testing the Application

To test the application, we simply fake the GUI. Because of the separation I’ve made here, that amounts to implementing a network client that generates a sequence of data updates and requests, and asserts that it receives the expected data-update events and delivery of requested data. In a different world (the real one), where the API exists as method calls instead of a network socket, we create a headless driver that makes the calls and receives the events. Even more granularity can be achieved by single-stepping the event loop (when that makes sense), to assert that certain events do NOT occur.

In our example, we have the fake GUI assert that it receives a linkRateChanged() event after the test modifies the linkRate variable through an internal update function. If the GUI can set the linkRate, then we can also test the round trip in 3 steps:

1. Have the GUI send the update data request
2. Step the application event loop
3. Assert that the fake GUI receives the expected data changed event.
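The three steps above can be sketched as a test against a toy application with an explicitly stepped event loop (all of the scaffolding and message names here are hypothetical, not real harness code):

```python
from collections import deque

class ToyApplication:
    """Toy data model with a steppable event loop, so a test can
    control exactly when queued messages get processed."""

    def __init__(self):
        self.link_rate = 0
        self._pending = deque()   # request messages from the (fake) GUI
        self._events = []         # events emitted toward the GUI

    def post(self, message):
        """Called by the fake GUI; just queues the request."""
        self._pending.append(message)

    def step(self):
        """Process one queued message, emitting events as needed."""
        message = self._pending.popleft()
        if message["type"] == "updateLinkRate":
            self.link_rate = message["value"]
            self._events.append({"type": "linkRateChanged",
                                 "value": self.link_rate})

    def drain_events(self):
        """Hand the emitted events to the test and clear the queue."""
        events, self._events = self._events, []
        return events

# The round-trip test, playing the part of the fake GUI:
app = ToyApplication()
app.post({"type": "updateLinkRate", "value": 42})  # 1. send the update request
app.step()                                         # 2. step the event loop
assert app.drain_events() == [                     # 3. assert on the event
    {"type": "linkRateChanged", "value": 42}]
```

Because the loop is stepped manually, the same structure also lets a test assert that an event did NOT occur after a given step.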

In both circumstances, we assert that the application generates events according to a specified protocol. With a large enough suite of individualized tests, we cover the application’s behavior for all the actions the GUI can take. When we miss an action, we simply record it in a new test as an expected event/response sequence.

#### Testing the GUI

To test the GUI, we simply fake the application. A test harness drives the GUI from one end, clicking and dragging on widgets and buttons, while the application it links to provides a pre-programmed series of responses. If we generate the GUI events directly, e.g. by calling the event handlers for specific widgets, we can even drive the GUI in a headless environment (by virtualizing X11). The tests remain focused on accuracy of presentation.

In our example, we have the fake application emit a linkRateChanged() event, and assert that the GUI updates the color according to presentation requirements. If the GUI can set the linkRate, then we can also test the round trip, using a similar 3 steps:

1. Drive the GUI to go through the update link rate dialogs/widgets.
2. Assert that the updateLinkRate() event is received by the fake application, and respond with a pre-programmed linkRateChanged() event.
3. Assert that the real GUI updates the rendered color.
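Those steps can be sketched the other way around, with a toy GUI driven against a pre-programmed fake application (again, every name here is a hypothetical stand-in for real harness code):

```python
class ToyGui:
    """Toy presentation layer: holds the currently rendered color and
    owns the rate-to-color mapping, per the smarter-GUI design."""

    def __init__(self, app):
        self.app = app
        self.color = "red"  # boot state: assume disconnected

    def on_link_rate_changed(self, event):
        """React to a linkRateChanged event by re-rendering the color."""
        rate = event["value"]
        if rate < 5:
            self.color = "red"
        elif rate > 700:
            self.color = "green"
        else:
            self.color = "yellow"

    def submit_link_rate(self, value):
        """Driven by the harness in place of clicking through dialogs."""
        self.app.post({"type": "updateLinkRate", "value": value})


class FakeApplication:
    """Pre-programmed application: records every request and answers
    an updateLinkRate with a canned linkRateChanged event."""

    def __init__(self):
        self.requests = []
        self.gui_callback = None  # wired to the GUI after construction

    def post(self, message):
        self.requests.append(message)
        if message["type"] == "updateLinkRate":
            self.gui_callback({"type": "linkRateChanged",
                               "value": message["value"]})

# Wire up and run the three steps:
app = FakeApplication()
gui = ToyGui(app)
app.gui_callback = gui.on_link_rate_changed
gui.submit_link_rate(850)                            # 1. drive the GUI
assert app.requests[-1]["type"] == "updateLinkRate"  # 2. fake app saw it
assert gui.color == "green"                          # 3. GUI re-rendered
```

The fake application never computes anything; it only replays the protocol, which keeps the test focused on the GUI’s rendering behavior.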

In both circumstances, we assert that the GUI performs renderings according to the events it receives from the fake application. Again, a large enough suite of individualized tests covers the presentation layer for all data states the application can take. We still record any missed behaviors into a new test, taking the form of an expected event/render sequence.

## What the Wall of Separation Achieves

Separating the GUI from the application, and treating it as a View or presentation layer only (with the application taking the role of a data Model), gives us the ability to test each piece separately. The wall itself represents an expected set of command/response behaviors. In an ordinary implementation we have direct C++ API calls, but that just muddies the idealized separation, which motivated me to start out with a network-messaging description. Conceptually, testing the GUI can be approached with the same techniques as testing a client/server pair implementing a network protocol. If we clearly state the expected behavior, then each side of the fence merely has to uphold its end of the protocol.

Yes, the separation probably means more tests. But those tests will be smaller, faster to execute, and easier to write and maintain. When we do perform whole-system testing (which happens rarely relative to the automated protocol testing, because of the costs involved), it will catch use-cases of the interaction not already covered by the piece-wise tests. However, a record of the command/response sequence in each failing whole-system use-case can be rolled back into separate piece-wise automated tests, one for each side.

Ultimately, our goal is to catch bugs earlier by exercising the behavior protocol of each side separately. By working toward that goal in this way, we can also ensure that we meet our system requirements by encoding them into automated behavior tests exercised against both sides of the wall.

## My Career Forks

Over the past couple of months, I’ve busied myself with finding offers of other employment. IMS (now Zodiac Inflight Innovations) has not given me the career growth that I initially anticipated. I find that, though they are getting better, management has been fairly erratic: all their time goes to putting out fires, and very little to investing in the software quality practices that prevent such emergencies.

I interviewed with Google twice, first last month at the Irvine office for a position as Sr. Software Engineer. They decided not to hire at that time, because my performance during the interview was “on the edge.” However, their recruiters reached out later to have me interview for a Software Engineer in Test position. I subsequently read the book “How Google Tests Software” and was quite impressed with the specialist role. It’s more of a framework and tools builder for the other engineers, all with the goal of improving quality.

During the time between interviews, the kind fellows at JobSpring Partners, who helped me get hired by IMS (now Zodiac Inflight Innovations), followed up to discover that I was indeed unsatisfied with the career growth opportunities in my current position. They connected me with Fisker Automotive, which is rebuilding a team of software engineers so that they can rewrite the infotainment software that controls the Karma.

So, I stand at a crossroads in my career. Do I choose the smaller company and pursue technical leadership, or choose the well-established one and pursue technical skills growth? I did an analysis to help myself decide. I would be comfortable with either choice in everything but the “Daily Work” category.

Daily Work
– fisker: system designer, software architecture

Skills improvement
– google: software engineering, how to program “at scale”

Personal Interactions
– fisker: upper management

Social Capital
– fisker: customer interaction

Technical Capital
– google: lots already in place, but must find a project to exploit it
– fisker: little, have to organize it all myself

Industry
– google: has wide variety of projects (incl. computational finance)
– fisker: automotive, embedded, gui design

On Paper
– google: I have google on my resume, with crazy job title (they let you make one up)
– fisker: I put “declined an offer from google to work for fisker” on my resume

The Exit