Dark Matter: 27 September 2017

First, a quick note:

From the outset, Dark Matter has been an exercise in proselytizing. I think more people should read Ben Bashford on music as software. I want more people to consider how the blockchain transforms what it means to be a street vendor in Kabul. Design principles, better app onboarding, and even other people’s newsletters are the kinds of things I wanted to spread.

Every week, a few thousand people click on a Dark Matter link. I’ve no aspirations to build a revenue stream on top of those clicks, no Blue Apron code to share with you. Instead, I’m asking this:

If you enjoy the newsletter this week, share it with someone else who might find something worthwhile in it. tinyletter.com/ianfitzpatrick

I’m not trying to build a brand, just a larger island of misfit toys.


This week in pipes you see, pipes you don’t:

Matt Webb begins with this video of self-guided Chinese warehouse robots (3min) and posits:

It used to be that the pipes were visible, and the packets were dumb but had addresses. The junctions were smart and did the work. We call them routers. Here there are no routers and there are no pipes. But instead, autonomous packets.

And that’s the fascinating part of the future we’re peeking at: the news, music, products, and facts we want will find their way to us — not through channels or pipes, but with a built-in, self-steering logic.

* * *

Spend a few minutes with Ben Terrett on the underlying service models of smart cities (4min). Ben earns points for referencing Schelling Points, which are the new Overton Windows.

* * *

The possibilities presented by a universe rich with super-low-cost Linux boards (2min) are every bit as compelling as a universe replete with Elon Musk products.

* * *

Buried in a post on design challenges for mixed reality development (7min) by Greg Madison on the Unity 3D blog, this on the notion of ‘intention amplification’ (a new term to me):

By centering design around a user’s intent and using intelligent objects that respond to that intent by modifying themselves, our thought process is not limited by physical laws, but rather will allow us to achieve a new freedom.

Spend a few minutes with the entire piece — or better yet, start at the very beginning.


This week in notes from the department of legality & compliance:

I had lunch last week with my dear old friend Mike Sullivan, who once remarked:

“In Japan, everything is illegal until it’s legal. In the USA, everything is legal until it’s illegal. It’s a compliance culture.”

The observation was significant enough that I felt compelled to jot it down at the time. I was reminded of it this week in reading Giles Turnbull’s piece on permission (4min), as told through the ‘It’s OK’ prints made by the Government Digital Service:

I think what’s missing in some organisations is explicit, clear permission.

There are leaders who don’t realise that there are teams waiting for permission to work in different ways. There are teams who hear conflicting messages from different leaders about what’s allowed and what isn’t. This lack of clarity is slowing change down.

This mirrors my own experience, particularly inside organizations with younger, less-experienced teams.

* * *

I’m particularly drawn to Lara Hogan’s explanation of language and framing inside her broader, and fantastic, piece on working toward ongoing compensation and promotion equity (9min). This is really smart (and it absolutely matters):

When describing the statistically significant difference in rates of promotion to those leadership groups, I chose my words carefully. Rather than “women and nonbinary people get promoted” or “earn promotions”, I used “we promote women and nonbinary people more slowly”. Because, after all, it is the group of managers who are doing the promoting at an unfair rate, rather than women and nonbinary people not earning the promotions as quickly.

* * *

18F has published a brief guide to giving and receiving feedback (4min). Share it broadly and vigorously.

* * *

This looks fascinating: a study in the Journal of Cross-Cultural Psychology that explores the propensity of three-year-olds to ask for help in completing tasks (note: full study paywalled) finds distinctly different patterns in the seeking of outside expertise among children from Japan, Canada, and the United States. Tip of the hat to Andreea Nastase for the link.

* * *

via James Boardwell, Lauren Kelly’s explanation of nudges and the role of choice architecture in the service design (or lack thereof) of Uber’s surge pricing (6min), on the Dura site, is wonderful. Spend a few minutes with it; the entire site is lovely.

* * *

5 minutes well-spent: Jeff Guhin’s piece on the interaction models from which his professional (and non-professional) experiences borrow:

There’s a problem with treating the world we encounter like an ethnographer, and it’s helped me to realize that, as a sociological ethnographer, I have five different ways I can approach the world. Here are the kinds of interactions I’m interested in: (1) surviving, (2) completing, (3) understanding, (4) engaging, and (5) correcting.

It’s really fantastic, and broadly applicable across walks of life.

* * *

Elizabeth Churchill — Director of User Experience at Google — posted a heady, frequently surprising piece to the EPIC blog this week on ‘new data dialects’ (11min). An especially tantalizing selection:

These ideas of ethnomining and of reading the logs and following the traces, and of interviewing databases, triangulated with more “traditional” ethnographic methods like interviewing and participant observation, have been very powerful in my work and in the work of my teams… at eBay, Michael Gilbert and I combined detailed behavioral log trace analysis with data visualizations of account holders’ search practices. Interviews revealed the shopping habits and patterns of consumers looking for bargains. The insights from this work would not have been possible with interviews alone, nor from purely studying behavioral logs, nor from aggregates like “daily actives” summaries. Our analyses convinced our product counterparts to think beyond “the user” as a single entity, perhaps a single person, and instead conceptualize a social entity—an example being multiple people on one account, or perhaps a single person with multiple accounts trying to maintain boundaries between social roles.

One of the benefits of supplementing traditional qualitative research with ethnomining is the capacity to give shape to things like the ‘social entities’ that Churchill calls out. Absent the data, these behaviors are too frequently dismissed as ‘edge cases’ not meriting investigation.
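To make ‘reading the logs’ a bit more concrete, here is a toy sketch of the sort of trace query that might surface one of those social entities: flag accounts whose sessions show barely-overlapping interests. The data, field names, and threshold are entirely invented for illustration; this is my sketch of the general idea, not eBay’s actual method.

```python
# A toy version of 'interviewing the database': scan behavioral logs for
# accounts whose sessions look like different people, e.g. sessions whose
# search categories barely overlap. All data and thresholds are invented.
import pandas as pd

searches = pd.DataFrame([
    {"account": "a1", "session": 1, "category": "power tools"},
    {"account": "a1", "session": 1, "category": "circular saws"},
    {"account": "a1", "session": 2, "category": "dollhouses"},
    {"account": "a1", "session": 2, "category": "kids shoes"},
    {"account": "a2", "session": 1, "category": "vinyl records"},
    {"account": "a2", "session": 2, "category": "vinyl records"},
])

def session_overlap(group: pd.DataFrame) -> float:
    """Jaccard overlap between the category sets of an account's sessions."""
    sets = group.groupby("session")["category"].agg(set)
    if len(sets) < 2:
        return 1.0
    first, second = sets.iloc[0], sets.iloc[1]
    return len(first & second) / len(first | second)

overlap = searches.groupby("account").apply(session_overlap)
# Low overlap across sessions hints at a shared, 'social entity' account.
print(overlap[overlap < 0.2])
```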

* * *

Speaking of edges, Martin Weigel dropped a white privilege mixtape (2min) this week.


This week in the identity politics of our AI overlords:

“Information bottleneck” (9min) looks like it might be an important idea in both neuroscience and AI over the next decade, which makes it worth at least a few minutes of your time. Natalie Wolchover in Quanta:

The idea is that a network rids noisy input data of extraneous details as if by squeezing the information through a bottleneck, retaining only the features most relevant to general concepts. The bottleneck could serve “not only as a theoretical tool for understanding why our neural networks work as well as they do currently, but also as a tool for constructing new objectives and architectures of networks.”
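For reference, the bottleneck has a crisp formal statement. In Tishby, Pereira, and Bialek’s original formulation (my rendering; it isn’t quoted in the Quanta piece), you look for a compressed representation T of the input X that stays as informative as possible about the target Y:

```latex
% The information bottleneck objective: compress X into a representation T
% while keeping T predictive of Y; \beta sets the trade-off between the two.
\min_{p(t \mid x)} \; I(X;T) \;-\; \beta \, I(T;Y)
```

A small β squeezes hard and keeps only the broadest strokes; a large one lets more detail about the input survive.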

* * *

The most profoundly difficult thing I read this week was Shreeharsh Kelkar’s piece for Scatterplot on early developments in artificial intelligence, post-foundationalism, and Michal Kosinski’s much-derided “gaydar” study (9min), which claims to have trained a machine to tease out sexual orientation from a library of facial images. This, among several paragraphs, rings true:

In what I have found to be one of the best descriptions of what it means to do technical work, Phil Agre, who worked both as an AI researcher and a social scientist, points out that AI researchers rarely care about ideas by themselves. Rather, an idea is only important if it can be built into a technical mechanism, i.e. if it can be formalized either in mathematics or in machinery. Agre calls this the “work ethic”.

The whole post is worthy of your time.

* * *

Erica Virtue has an easy-to-follow, razor-sharp piece on using AI in the Facebook Recommendations design process (8min), along with the best name ever.

* * *

From a Wired UK profile of Finnish data scientist Harri Valpola (8min), this gem:

Valpola’s method is simple: “The best way to clean dirty data is to get the computer to do it for you.” His first attempt was revealed in a paper published in 2015, which described a ladder network: a neural network that trained itself to deal with complicated situations by injecting noise into its results as it went along, like a teacher keeping her students on their toes by throwing mistakes into a test.
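That quote compresses a lot. Here is a toy numpy sketch of the underlying intuition (deliberate noise injection followed by a learned denoising step); it’s my illustration of the idea, not Valpola’s actual ladder-network architecture:

```python
# A toy illustration of the noise-injection idea behind ladder networks:
# corrupt a signal on the way through, then learn to reconstruct the clean
# version, so the model gets good at handling 'dirty' data. This is the
# intuition only, not the real ladder-network architecture.
import numpy as np

rng = np.random.default_rng(0)

def noisy_pass(x, weights, noise_std=0.3):
    """Forward pass that injects Gaussian noise into its own activations."""
    clean = np.tanh(x @ weights)
    corrupted = clean + rng.normal(0.0, noise_std, clean.shape)
    return clean, corrupted

x = rng.normal(size=(256, 8))
w = rng.normal(size=(8, 4)) * 0.5
clean, corrupted = noisy_pass(x, w)

# Least-squares denoiser: learn a scalar map from corrupted back to clean.
a, _, _, _ = np.linalg.lstsq(
    corrupted.reshape(-1, 1), clean.reshape(-1), rcond=None
)
denoised = corrupted * a
print("error before denoising:", np.mean((corrupted - clean) ** 2))
print("error after denoising: ", np.mean((denoised - clean) ** 2))
```

In the real thing, as I understand it, a clean and a corrupted encoder run side by side and a denoising function is learned at every layer, which is what lets the network train itself even on unlabeled data.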

* * *

This, from Corin Faife’s fantastic article for How We Get to Next on access to genetic therapies and the socio-economic mechanics of CRISPR (13min):

As Kozubek has noted, a handful of American insurance companies have already issued policies that specifically exclude gene therapies in order to avoid bearing the cost, a move that could set a precedent across the industry. In the U.S., sickle-cell anemia most commonly afflicts African-Americans and other communities of color, which tend to be poorer and have worse access to healthcare than less-affected communities. (Princeton anthropology professor Carolyn Rouse has argued that “sickle-cell disease funding is a form of social justice for blacks as breast cancer funding is for women.”)

* * *

I love being alive in 2017: The Ambient Shipping repo on GitHub “contains utilities for capturing AIS messages broadcast by passing ships and then joining them with public data sets that reveal what the ships are carrying.”
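I haven’t run the repo, but the join it describes is easy to picture. A hypothetical pandas sketch, with invented vessels, MMSI numbers, and field names (the actual radio capture and AIS decoding are the hard parts the repo’s utilities handle):

```python
# A hypothetical sketch of the join the Ambient Shipping README describes:
# decoded AIS position reports matched against a public data set on the
# ship's MMSI identifier. All values and field names here are invented.
import pandas as pd

# Decoded AIS position reports: one row per broadcast captured.
ais = pd.DataFrame([
    {"mmsi": 366999712, "lat": 37.81, "lon": -122.36, "speed_kn": 12.4},
    {"mmsi": 367776050, "lat": 37.79, "lon": -122.40, "speed_kn": 0.1},
])

# A public data set (customs manifests, say) describing ships and cargo.
manifests = pd.DataFrame([
    {"mmsi": 366999712, "vessel": "EVER EXAMPLE", "cargo": "furniture"},
    {"mmsi": 367776050, "vessel": "SS HYPOTHETICAL", "cargo": "scrap metal"},
])

# Join broadcast positions to cargo records on the shared MMSI key.
sightings = ais.merge(manifests, on="mmsi", how="left")
print(sightings[["vessel", "cargo", "lat", "lon"]])
```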

Until next week.