Last week, I attended “AI” UK – which was organised by The Alan Turing Institute and where I served in a minor role on the program advisory committee – and a workshop on Responsible “AI”, co-organised by some fellow fellows of the Software Sustainability Institute. At both events I got the chance to talk about the different ways of embedding participatory approaches in how we do scientific research – and why doing so is important. And, somewhat continuing similar conversations at The European Festival of Journalism and Media Literacy, there was increasing talk about doing and informing data science from below.
At “AI” UK, one of the sessions in the very first slot was titled Nothing About Us Without Us. This panel included our own citizen science work on sensory processing and autism with AutSPACEs (represented by Georgia & James), as well as representatives of the People’s Panel on AI, which Connected by Data ran in November last year. Their deliberative process brought 12 members of the public into envisioning “AI” policy making. Jointly, the panel outlined how participatory methods can allow us to move from being data subjects to being agents that can influence how decisions about data and its use are made.
This sentiment was mirrored, in slightly different form, at the session Data, Labour and “AI”. The panel was moderated by journalist Billy Perrigo, who discussed with sociologist Karen Gregory, Matt Buckley of the United Tech & Allied Workers union branch, and Mophat Okinyi – who has worked in outsourced content moderation and is one of the founders of the African Content Moderators Union. Mophat shared his first-hand experiences of doing content moderation and of how tech companies outsource this kind of work, in particular to the global south, while offering little in terms of pay and worker safety. The panel discussed how this type of “hidden outsourcing” – be it for content moderation on social networks or the training of language models – is done very deliberately, to hide the actual labor that goes into enabling any “AI” or automation.
This type of hidden labor is in a way just an extension of the longer tech history of fauxtomation, in which “automation” is nothing but hidden workers doing the actual work at the end of the day. Karen Gregory noted that, given that there are plenty of workers with very intimate knowledge of how this “automation” works, it is bizarre that we keep talking to executives and sales people about the risks of “AI”, and not to those with first-hand experience from the other side. Relating this to her research with food delivery gig workers, she highlighted how those actually working within these technical systems are the ones who become the experts in these fields, effectively becoming ethnographers of their own work. But their viewpoints remain marginalized, partially because anthropology is kinda out these days, but also because colonialism and its friends aren’t dead: their work is done in the global south, and it also looks a lot like care work, which historically was – and still remains – highly undervalued.
To me, all of these factors make the current general purpose generative “AI” hype even more insidious: it’s not only that the technology itself mostly remains a solution in desperate need of a problem, or that it will be used to foster further “deskilling” (to paraphrase Sci-Fi author Adrian Tchaikovsky during his “AI” UK session: art is a craft that needs to be improved, and outsourcing any part of your creative work means you are not improving your skills). It’s that most, if not all, of these commercial tools are created using highly exploitative practices – from getting the data from questionable sources with even more questionable consent, all the way to the labor practices that go into preparing those models. Which means that there’s no way to ethically use any of these tools/models in the first place. And I don’t think there will be a way any time soon – unless we (analogously to Sandra Harding’s Sciences from Below) start doing “tech from below”, beginning with listening to those who will be affected by these technologies and developing them collectively.
Eugenia and I made our first zine, available here, for our ongoing documentary photography project as if the sand were stone. The project focuses on how small villages & hamlets across the Argentinean Pampa are adapting in response to a shift in urbanism and transport – from rail to road.
For our first zine, we selected 24 images, all made on black & white film, and also created a small website for the project. On that page you can also find more background on the project, where those villages are, etc.
The full album of images is on Flickr.
Get a copy of the first zine of the project
As if the sand were made of stone is an ongoing documentary photography project that we (that is Eugenia and I) are working on. The project covers how small villages & hamlets across rural Argentina – in particular the Argentinean Pampa – are changing and adapting in response to transport shifting from rail to road. The idea for the project goes back to April 2023, and we have been working on-and-off on it since September 2023.
The project brings together traditional analog media (i.e. black-and-white film photography and wet printing) and the digital (i.e. through the project website & data visualisations). Beyond being distributed in zine form, the images made as part of this work are also openly licensed and available on Flickr.
Last week I attended The European Festival of Journalism and Media Literacy, to join a panel on Artificial intelligence – Which skills do I need?. Moderated by Lee Hibbard, the panel consisted of writer Maria Farrell, computational linguist & YouTuber Letiția Pârcălăbescu, and me.
After a brief intro talk by Letiția – covering what “AI” is and can (and, more importantly, can not) do – our discussion quite quickly drifted towards the issues of hype and power. In this, it also mirrored a point made in an earlier panel on “AI” & journalism, in which Divina Frau-Meigs argued that “Artificial Intelligence” is in itself a misnomer, as there is nothing intelligent about it – instead it produces “Artificial Information”. This critical focus on power maybe isn’t too surprising, given how mindlessly and problematically AI is thrown at us these days, not only by corporations but also by institutions.
Which isn’t even that new: arguably it goes back to the British Post Office scandal of the early 2000s, in which “faulty software” (no one bothered calling that one “AI” yet) screwed up accounting, leading to hundreds of people being wrongly accused of theft & fraud. But of course, that doesn’t mean that anyone has learned a lesson from this: in the Netherlands, algorithmic decision making led to tens of thousands of people being penalized over automatically generated fraud suspicions around child care benefits – based on dubious risk indicators. In France, the Caisse d’allocations familiales (CAF) – part of the social security system, covering family & housing benefits – uses a similar approach and keeps doubling down on it despite all criticism. And in the US, the National Eating Disorder Association decided to fire its helpline staff and replace them with an “AI” chatbot when the staff unionized.
Jointly, these examples give us a great overview of the problems with deploying any automated decision making tools: predictive algorithms are trained to reproduce the most likely (or average/mediocre) output, based on all the data that was fed into the system. Thanks to that, these tools can only reproduce all of our societal problems – but faster and at scale. In the case of the Dutch child care benefits, the system’s risk factors included having a dual nationality and having a low income (you know, the thing that makes you likely to request benefits in the first place…). And that’s not because “the algorithm” decided this, but because the tax authorities fed their own “blacklist” data into the system, which already before algorithmic decision making focused on “people with ‘a non-Western appearance’”. No amount of “de-biasing the training data” would fix that racist behaviour, as it seems very much wanted by the people using those systems. Typically, “the purpose of a system is what it does” refers to unintended consequences, but it’s hard to even speak of ‘unintended’ in this case.
This does not mean that using automated decision making doesn’t change anything: the use of these tools provides a way of ‘empiricism-washing’, giving decisions the veneer of being scientific or “more objective”. Beyond the Dutch tax authorities, this also holds true for the British Post Office and the French CAF. The director of the latter outright claimed that their “algorithm is neutral” and would be “the opposite of discrimination”. But of course the reality is far from that, with CAF’s approach also targeting people with low incomes, those living in ‘disadvantaged neighbourhoods’, and those with disabilities. This move towards algorithmically made decisions thus provides a way to further deepen epistemic injustices & violences, as any criticism of decisions can be swatted away with “computer says no”.
Lastly, the example of the eating disorder helpline shows what Cory Doctorow described as one of the big risks of “AI”: we’re nowhere near a place where bots can steal your job, but we’re certainly at the point where your boss can be suckered into firing you and replacing you with a bot that fails at doing your job – which is exactly what happened when that chatbot gave dangerous advice to people looking for help with eating disorders. And the reasoning behind deploying automated decision making in institutions is quite similar: maybe there is less overt squashing of labor rights, but it is equally hidden behind promised “efficiency improvements” and “savings”.
Between automating bad decisions and the increasing lack of recourse against those decisions (can’t argue with a computer after all), it’s easy to see how the current use of automated decision making contributes to hollowing out our institutions in the name of efficiency – and thus undermines our trust in them. That’s why the “AI skills” that we came up with relate to the ability to critically question power in relation to these decision making tools, including: How are these tools being developed? By whom and for what purpose? And in the service of whom are they being deployed?
And based on our panel discussion, answering those questions in practice might include supporting independent journalists & journalism, being politically engaged, and considering joining your union – in case your boss gets suckered into the AI hype.
p.s. Yesterday evening I called my mom, telling her about the panel. We ended up chatting about some AI use cases that she might already interact with – for example, the autocorrect/autocomplete on her phone, which she encounters a lot. Her reaction was “I see, so it can be something useful, but you need to always double check, as otherwise it just makes up the weirdest things” – thus accidentally paraphrasing that alleged 1979 IBM slide: “A computer can never be held accountable, therefore a computer must never make a management decision.”
If you have spent any time on Mastodon and are interested in data visualisations, geography, RStats, or any of the intersections thereof, you are likely to have seen the stunningly beautiful 3D map visualisations that @terence has been creating with the rayshader package. I have been itching to get an excuse to play around more with this myself, and finally found a lovely one within the context of another little project.
We – that is Eugenia and I – are working on a small documentary photography project about the little villages/hamlets that are spread out over the Argentinean Pampa (that is the region, not just the province). We started toying with that idea back in April last year, and had the chance to make a lot of photos on a trip to Argentina in September.
What makes these towns really interesting are both their origins and their ongoing decline today: the foundation of many of them goes back to the colonisation of Argentina in the 1800s, when the growth of the Argentinean railway network enabled moving into more remote areas. Thanks to different government programs, plenty of boomtowns were created at quite regular intervals along the developing railroad network. But with the emergence of the road network – alongside transport via cars/trucks – and the privatisation of the rail network, the trains stopped coming for many of those villages, leading to an ongoing decline in population.
As part of our documentary project we are creating a little zine with a first sub-selection of some of the photos. We are also working on accompanying it with a small website (more info and links to all of that once we have the printed zines)! For this website we thought it would be nice to have some maps showing the region and where the towns are.
Which finally brings us back to the mapping excuse, as it seemed like those 3D raytracing maps of population density could quite nicely show where those villages are and how they are distributed throughout the region. Looking at the map of the province of Santa Fe above, that seems indeed to work quite well! Based on population data from kontur, one can not only see nicely where settlements are, but also a quite regular pattern that spreads more or less diagonally across the map!
Zooming in a bit more, into the department of General López, we can see that pattern on a smaller scale, with one “band” of settlements moving from bottom-right towards top-left, and another one moving from bottom-left to top-right! Both of them follow nearly straight lines, mirroring the railway lines these places are located on.
I really enjoyed creating those little visualisations that will nicely go alongside the photos. Thanks to a lot of great tutorials and hints online, it wasn’t even too hard to create those from scratch: Erik Jensen has a great write-up on how to create population density maps with rayrender, which itself is an adaptation of a video tutorial by Spencer Schien that also has online code documentation.
If you want to reproduce these figures here, I put my code into a gist. If you want to adapt them for your own countries/regions of interest, data that can be used for this – including shapefiles for filtering to country or administrative borders as well as the population data – can be found on The Humanitarian Data Exchange.
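To give a rough idea of what the workflow in the gist looks like, here is a minimal sketch in R. The file names, the column name, and the grid/zscale parameters are placeholders, not the exact values from my gist – adjust them to whatever you download from The Humanitarian Data Exchange.

```r
# Minimal sketch of a rayshader population-density map.
# File names and parameters below are placeholders -- adapt to your
# own downloads from The Humanitarian Data Exchange.
library(sf)        # vector data handling
library(stars)     # rasterisation
library(rayshader) # 3D rendering

# Kontur population data (H3 hexagons) and an administrative boundary
population <- st_read("kontur_population_AR.gpkg")
boundary   <- st_read("santa_fe_boundary.shp") |>
  st_transform(st_crs(population))

# Keep only the hexagons that fall inside the region of interest
region <- st_intersection(population, boundary)

# Rasterise the population counts onto a regular grid
grid <- st_rasterize(
  region["population"],
  template = st_as_stars(st_bbox(region), nx = 400, ny = 400,
                         values = NA_real_)
)

# Convert to the matrix that rayshader expects
mat <- matrix(grid$population,
              nrow = dim(grid)["x"], ncol = dim(grid)["y"])

# Render the 3D "spike" map and save a raytraced version of it
mat |>
  height_shade() |>
  plot_3d(heightmap = mat, zscale = 100, solid = FALSE)

render_highquality("population_map.png", samples = 100)
```

The main tuning knobs are the grid resolution (`nx`/`ny`) and `zscale`, which controls how tall the population spikes appear relative to the map extent.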
The Wassail or orchard-visiting wassail is an English tradition in which people bless their fruit trees to ensure a good harvest come autumn. Typically, these celebrations and blessings are accompanied by song and dance – including Morris dancing – and bring the local community together.
In mid-January, a Wassail was held in the East Greenwich Pleasaunce, a small park that used to be a local graveyard for sailors from the Greenwich Hospital.
A few years ago, the community planted different fruit trees in the park. As part of the ceremony, the trees are blessed both by sprinkling some cider/apple juice onto the trees and by placing pieces of bread or toast onto their branches.
The full album of images is on Flickr.
Flickr, as one of the old school photo sharing communities, celebrated its 20th birthday on February 10th, 2024. To celebrate, there were photo walks in many different places around the globe, including in London. Beyond lots of Flickr themed swag, our photo walk also took us into the British Museum.
The London Flickr group has a thread where you can find links and images of a lot of the other attendees.
You can also find the full album of my images on Flickr, where they are openly licensed as usual.
Happening just a week after the London Pride event/parade, the London Trans Pride gathered more than 25,000 people for a march from Trafalgar Square to Wellington Arch at Hyde Park Corner – despite a rainy start.
The more mainstream crowds attending the London Pride have turned the event into a party-like spectacle. In contrast, the London Trans Pride consciously sticks to being a political protest march.
You can find the full album of images on Flickr, where they are openly licensed. You can also buy hand-made prints of some of these images, which I made using traditional darkroom printing.
In April 2023, I went to Argentina, with the main reason being attending the 7th CSVconf in Buenos Aires. But of course we also went to visit Eugenia’s family over Easter. And we also managed to squeeze in some Lecturers Without Borders lectures while there – even one in German, given by myself!
Between all of these things we also found some time off for sightseeing. So we headed up north to the Iberá Provincial Natural Reserve to look at the quite unique ecosystem there (and to finally see some Capybaras)!
You can find the full collection of images on Flickr, where they are openly licensed. You can also buy hand-made prints of some of these images, which I made using traditional darkroom printing.
In October 2022, Eugenia and I spent a few weeks in Laos, both in Vientiane and Luang Prabang. A big reason for going was that Eugenia gave some workshops in Vientiane, but we also gave some school lectures through Lecturers Without Borders while there. We also scheduled our trip to Luang Prabang to coincide with the local Boun Ok Phansa celebrations – the end of the three-month Vassa observed by Theravada practitioners, often also called “Buddhist Lent” in English.
Due to the COVID pandemic, 2022 was the first time in 3 years that the traditional celebrations could happen. Most temples and houses in the city are decorated with colorful lanterns and candles.
As part of the Boun Ok Phansa celebrations, large floats – illuminated by candles – are built and moved in a parade through the town center of Luang Prabang, past the former royal palace and many of the adjacent temples. Besides the floats, schools and other groups walk with lanterns or perform music and dances.
You can find the full collection of images on Flickr, where they are openly licensed. You can also buy hand-made prints of some of these images, which I made using traditional darkroom printing.