March 26, 2020

What print's Industry 4.0 update means for publications

Despite the shift toward digital media, and especially digital publications, print is here to stay. What’s certain is that in the future, it will occupy an increasingly specific place. We look at how the fourth industrial revolution has played out in updating the print industry and helped it stay relevant.

How print is changing for the better

Industry 4.0 refers to the further digitalization and automation of manufacturing that accompanies the fourth industrial revolution. The first revolution started with water and steam, the second with mass production and electricity, the third with the widespread adoption of computers.

In this fourth revolution, the use of “smart” technology means improved monitoring, simplification, and optimization of the printing process. This technology, in turn, builds on innovations in print’s component technologies, such as greener inks, papers and printing practices. Combined with a tweaked (but still essential) human role, these advances work together to increase profitability and reduce waste while handling increasingly complex demands.

These changes are also triggered or reinforced by big policy changes, as demonstrated by global children’s publisher Scholastic, whose diverse output of color images, varied papers and special fonts demands relatively more resources.

And just like the notorious wastefulness of fast fashion companies and their almost monthly collections, other groups of big players like textbook makers (call it Big Textbook, as you might Big Pharma) have long been criticized for their model of frequently updating print editions to protect their bottom line. That said, Pearson, one of the biggest textbook publishers, has since gone digital-first in response to changing demands and habits.

Ideally, this one-two combo of a technological revolution and improvement of industry practices means things that should be printed can be, and things that would be better off digitalized are not printed.

Why it’s still going to be needed

With or without pressure to innovate in sustainable ways, our continued need for physical experiences is not going anywhere, and neither are the printed materials that enhance them. Even if we exclude the “main attractions” of beautiful labels and packaging, printed instruction manuals and marketing materials will still make up a large part of the physical product experience.

However, outside the product context, print will still be vitally important for publishing due to its still largely permanent nature (unless of course, Toshiba’s innovative erasable printing technology becomes more widespread).

We’ve written before about the importance of slow journalism — where the emphasis is on accurate and detailed versus immediate reporting. The resources involved in print publications, not to mention the irreversibility, mean that far greater care is involved: you can make near-immediate updates to web articles and ebooks, but there’s no making up for shoddy fact-checking after printing thousands of copies of a book. Delayed Gratification, for one, is a quarterly print publication by Slow Journalism that reports on events after the dust has settled and the true impact becomes apparent.

And even beyond preserving and relaying information, so long as print continues to hold cultural capital and authority for this very permanence (which it does), it serves as a means of expression and gives a voice for the most independent of publishers, something we saw first-hand when we moderated a zine fest near our old office.

The Takeaway

Every time we physically throw out a piece of paper bigger than a receipt for lack of a recycling bin, that tangible experience might give us a lot more pause than repeatedly downloading, uploading, copying and moving hundreds of gigabytes of data. It could be that the visceral reaction of “think of all those trees” made print an especially big target for sustainable innovation, even as we ignored the impact of ‘invisible’ digital technologies like AI or even e-readers.

That said, the progress up until now is welcome because it’s meant more options. As MAEKAN continues to experiment with product, the availability of new innovative production methods allows us to do something good while consciously weighing our environmental impact. This means we can stay excited when we consider what our first MAEKAN print publication might look and feel like when we get to it (we’re always open to suggestions, by the way).

 

March 16, 2020

You Feel Us? — Let's Be Wary of Emotion Recognition AI

With agencies and companies increasingly adopting AI into their workflows, one of the possible uses could be using it to “read” people and how they’re feeling. We look at how it does this, but more importantly, why this is a markedly bad idea for now.

How AI “Reads” Us

When AI is used for emotion recognition, it reads many of the following metrics in real time:

  • facial expressions
  • voice patterns
  • eye movements
  • biometrics (heart rate etc.)
  • brain activity

No two emotion recognition AIs work the same way, and how they’re used differs from company to company. Data and analytics company Nielsen combines the analyses of those different metrics to attain an accuracy level of 77%; when those readings are checked against self-reports from respondents, the company claims accuracy goes up to 84%.
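To make the idea of combining metrics concrete, here’s a minimal sketch of “late fusion,” where per-modality emotion scores are merged by a weighted average. The modality names, weights and scores are invented for illustration and have nothing to do with Nielsen’s actual model:

```python
# Illustrative late-fusion sketch: merging per-modality emotion scores.
# All names, weights and probabilities below are hypothetical.

EMOTIONS = ["anger", "joy", "sadness", "surprise"]

def fuse_scores(modality_scores, weights):
    """Weighted average of per-modality probability distributions."""
    fused = {e: 0.0 for e in EMOTIONS}
    total = sum(weights.values())
    for modality, scores in modality_scores.items():
        w = weights[modality] / total
        for emotion, p in scores.items():
            fused[emotion] += w * p
    # Pick the emotion with the highest combined score.
    return max(fused, key=fused.get), fused

readings = {
    "face":  {"anger": 0.1, "joy": 0.6, "sadness": 0.1, "surprise": 0.2},
    "voice": {"anger": 0.2, "joy": 0.5, "sadness": 0.2, "surprise": 0.1},
}
weights = {"face": 0.7, "voice": 0.3}
label, dist = fuse_scores(readings, weights)
print(label)  # joy
```

The point of fusing modalities is that each one is noisy on its own; combining them (and, as Nielsen does, checking against self-reports) is what pushes accuracy up.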

Some companies like Affectiva might measure a specific number of emotions such as anger, contempt, disgust, fear, joy, sadness and surprise. That company also says they gather their data in different contexts that include “challenging conditions” such as changes in lighting and background noise and variances in ethnicity, age, and gender.

How this Can Be Used

Emotion recognition has numerous potential applications for situations where extra “eyes” are needed:

  • Employees: for assessing candidates and how “engaged” employees are.
  • Education: similarly monitoring student engagement.
  • Product Development: accurately analyzing reactions to products can give clues towards making them better.
  • Customer Satisfaction: reading customers and adjusting customer service based on how they’re feeling.
  • Automotive: keeping a ride safe by monitoring the driver and acting when they’re distracted or incapacitated.
  • Health Care: assessing patients and prescribing solutions.
  • VR and Gaming: increasing immersion or enriching experiences based on how players or people wearing a VR headset feel.

The Pitfalls

There are a few issues with emotion recognition AI that could have consequences if they’re not addressed before they’re scaled up:

  • Bias: the technology is far from perfect and can be biased by its training data as well as limited by its detection ability. For one, emotion recognition has already been shown to have some of the same racial bias issues as other AI applications. Furthermore, expressions of emotion are not uniform across cultures and will therefore produce less predictable results in diverse groups of people.
  • Shaky Science: at least one study has refuted the idea that we can reliably infer human emotions from facial movements. The science relating how humans actually feel to what registers on the surface is complex and might not give clear-cut truths an AI should act on. For instance, we might frown when we’re sad, but that’s only one of the possible reasons we would do so. Similarly, the study points out we might scowl for reasons that aren’t even emotional.
  • Market Driven: Emotion recognition is estimated to be a $20 billion market and is sure to grow. In an article for the MIT Technology Review, Karen Hao explains how training a single AI model can emit as much carbon as five cars do over their lifetimes, and how training at that scale is only possible for those with significant resources. This makes it harder for academics and grad students to contribute to research, creating a gap between them and industry researchers.

The Takeaway

The promise of using AI to read us lies in its blunt honesty: it tells us things about how we’re feeling that we won’t readily admit — not unlike a highly attuned and trained human who’s not afraid of calling us out. We also understand there are benefits to automating processes that free up resources elsewhere, as anyone who’s used Photoshop’s amazing Content-Aware Fill to edit photos could attest.

But perhaps the most worrying and unpredictable outcome of widespread AI-driven emotional recognition is the behavior it will incentivize in the future. If the technology reaches a saturation point in the market where it’s forced into every application possible (kind of like overenthusiasm for the Internet of Things), we’ll have those consequences to deal with as well.

Anyone who’s ever read an SEO-laden post from an aggregator understands what happens when rankings are at the whim of algorithms. The issue is that humans who don’t want to be weeded out by an algorithm might be forced to behave in unnatural and dishonest ways (say, forcing smiles) to “game” the system, much like a social media-savvy influencer might.

And that’s just people being judged by the AI: what about those who make decisions based on its suggestions? We’ve previously written about how outsourcing certain tasks to AI leaves us with nothing else but the toughest choices to fret over. There are certainly going to be many more when we also start to outsource our empathy at scale too.

 

March 9, 2020

Where Blockchain and the Creative Industry Meet

As a revolutionary technology, blockchain offers a lot of benefits for different industries, especially the creative economy. Sure it’s been synonymous with scams and extreme market volatility, but at its core are a few ideas that can benefit the creative industries.

The Primer on Blockchain

Blockchain is a disruptive technology that’s increasingly being adopted as a highly secure, decentralized public database. A key selling point is that it lets individuals make transactions without intermediaries, like say a bank. Each “block” in the chain holds information such as the following:

  • Transactions: when, where and amount.
  • Participants: who transacted with who (somewhat anonymized with a digital signature similar to a username).
  • Hash: a unique code that tells blocks apart from each other. Even near-identical blocks each get a distinct hash, created with special cryptographic algorithms.

A single block may store thousands of transactions, and that block exists as thousands or millions of copies (along with the entire chain of blocks) on different computers. Every time a transaction takes place, it gets verified by all those computers before it’s added as part of the newest block. Any attempt to change the information about a transaction will also change the hash.

To change a single block, someone with the intent (say, a hacker) would have to change every block that comes after it, on a majority of those computers at once, which is nearly impossible outside of an unlikely 51% attack.
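A toy sketch shows why that is: each block’s hash covers both its transactions and the previous block’s hash, so tampering anywhere breaks every later link. This is an illustration only, not how a production blockchain like Bitcoin’s is actually implemented:

```python
import hashlib
import json

def make_block(transactions, prev_hash):
    """A block stores its transactions, the previous block's hash,
    and its own hash computed over both."""
    body = {"transactions": transactions, "prev_hash": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return {**body, "hash": digest}

def chain_is_valid(chain):
    """Recompute every hash and check the link to the previous block."""
    for i, block in enumerate(chain):
        body = {"transactions": block["transactions"],
                "prev_hash": block["prev_hash"]}
        recomputed = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if block["hash"] != recomputed:
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

genesis = make_block(["alice->bob: 5"], prev_hash="0" * 64)
second = make_block(["bob->carol: 2"], prev_hash=genesis["hash"])
chain = [genesis, second]
print(chain_is_valid(chain))   # True

# Tampering with an earlier block invalidates the whole chain.
genesis["transactions"][0] = "alice->bob: 500"
print(chain_is_valid(chain))   # False
```

Changing even one old transaction changes that block’s hash, which no longer matches the `prev_hash` recorded in the next block — hence an attacker would have to rewrite everything downstream, everywhere at once.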

How Blockchain Can Protect us

In an article from McKinsey & Company by Ryo Takahashi, he outlines five forces blockchain tech presents for the creative industry and how they can protect the work of creatives:

  • Smart Contracts: blockchain networks can host smart contracts to help artists manage digital rights and share revenue. Meaning an artist can automatically get paid when somebody consumes their work. We’ve seen this concept applied to digital art ownership and collection in one of our previous stories on R.A.R.E Art.
  • Transparent Transactions: because of the transparency and robust tamper-proofing of blockchain, one of the appeals is that transactions for creative work can be easily seen and validated, increasing confidence in the platform that employs the technology.
  • Micro-Monetizing: Takahashi describes “micrometering,” which works by having the blockchain “record the precise components of the creative work that were used, defining the smallest consumable unit of creative content.” This could theoretically allow the smallest units of creative work (say, a few seconds of video footage) to be attributed, priced and compensated.
  • Dynamic Pricing: creatives could set prices themselves and control them regardless of the fluctuations in supply and demand that would normally affect creative content. Especially for stock assets, this could prevent a creative’s work being randomly put on discount if their platform wants to run a sale.
  • Reputation System: by being able to trace creative work, this means that credit is accumulated where credit is due, which could ensure better collaborative relationships and behavior.
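At their core, the smart-contract and micro-monetizing ideas above boil down to automatically splitting each payment among rights holders whenever a work is consumed. Here’s a hedged sketch in plain Python (a real smart contract would live on-chain, e.g. in Solidity; the names and shares below are made up):

```python
# Toy sketch of automated revenue splitting, the essence of the
# smart-contract idea. Holders and shares are hypothetical.

def pay_per_play(price, shares):
    """Divide one payment among rights holders by their agreed shares."""
    total = sum(shares.values())
    return {holder: price * s / total for holder, s in shares.items()}

shares = {"artist": 70, "producer": 20, "label": 10}
payout = pay_per_play(0.50, shares)
print(payout)  # {'artist': 0.35, 'producer': 0.1, 'label': 0.05}
```

The appeal of putting this on a blockchain rather than in a platform’s private database is exactly the transparency and tamper-resistance described above: everyone can verify the split actually happened.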

Other areas where blockchain can come into play include the authentication of luxury goods (knowing when and where a product was produced) as well as interesting concepts such as community governance and voting through proof-of-stake consensus algorithms.

Caveats

While blockchain holds a lot of promise for empowering creatives in their industry, the technology in that context remains largely unknown, unadopted and mostly unpopular. This is because creative work, and the living people derive from it, isn’t solely the domain of the workers who create the work, but of a network of several related fields like marketing and management.

These tie-in industries would need to be on board as well, lest an artist be able to, say, securely protect and authenticate their work on the blockchain but have no means of distributing it and collecting payment from there.

As Takahashi also points out, current technology prevents creative media from being stored directly on a blockchain (and there are likely plenty of creatives who wouldn’t want to do so anyway), and the infrastructure for handling micropayment features isn’t yet widespread enough and would need modifications.

The Takeaway

While it might be a while before we see a blockchain-based equivalent of Fiverr for creative work, that doesn’t mean there isn’t a lot to be gained from employing blockchain. But of course, getting people interested in it is half the battle, and then there’s the longer debate over which creatives would want to submit their work, and its monetization, to a robust and secure if potentially rigid system. While we’re interested in blockchain applications going forward, the adoption challenge has been very real for the industry.

January 13, 2020

Small audio big change — the impact of headphones and small speakers on our music

With listening experiences now emphasizing the small and the intimate, how does that factor in how music is produced now? We take a glimpse at how the prevalence of headphones and small speakers have changed music.

The Small Speaker Effect

Nowadays, we bring music with us everywhere we go, and it’s not hard to find at least one friend at a gathering who brought a portable Bluetooth speaker. The ubiquity of not just these personal speakers but also the even smaller ones in laptops, tablets and smartphones means music production is catering to the lowest common denominator.

In a Quartz article by Dan Kopf, he notes some of the key technical impacts:

  • Drivers: Drivers are the key components of sound devices that emit audio. Quality varies, but it’s safe to say not everyone is an audiophile; most people use cheaper headphones with lower-quality drivers. Further, the speakers integrated into a laptop usually aren’t that great simply because there’s no impetus to improve on them.
  • Highs/Lows: Because of the limitations of lower-quality speakers, they can’t accurately reproduce the treble and bass (high and low frequencies) that were mastered in the studio. The result is unpleasant and harsh sounds.
  • Reduced Dynamic Range: Songs are now mixed with less dynamic range, and music production involves testing on smaller speakers, such as those in smartphones, to check whether the sound is still perceived as loud or present.
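As a rough illustration of that last point, dynamic range compression pulls levels above a threshold back toward the quiet parts so everything stays audible on a small speaker. The threshold and ratio below are arbitrary choices, and real mastering chains are far more sophisticated:

```python
# Minimal sketch of dynamic range compression. Threshold and ratio
# are arbitrary illustrative values, not mastering presets.

def compress(samples, threshold=0.5, ratio=4.0):
    """Attenuate the portion of each sample's level above the threshold."""
    out = []
    for s in samples:
        level = abs(s)
        if level > threshold:
            # Only the part above the threshold is reduced, by the ratio.
            level = threshold + (level - threshold) / ratio
        out.append(level if s >= 0 else -level)
    return out

quiet_and_loud = [0.1, 0.9, -0.8, 0.3]
print([round(x, 3) for x in compress(quiet_and_loud)])  # [0.1, 0.6, -0.575, 0.3]
```

Notice the quiet samples pass through untouched while the loud peaks are tamed: the gap between loudest and softest shrinks, which is exactly what “less dynamic range” means.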

The Podcast Effect

The rise of the podcast, as well as listening for therapeutic effect, has emphasized privacy and a sense of intimacy around our listening habits, which of course means a greater role for headphones and earbuds. In an article for The New Yorker, Amanda Petrusich points out some of the effects on music production, which again cater to the needs of the listener:

  • Performance: she notes Selena Gomez and Billie Eilish’s tendency to sing closer to the mic, almost as if whispering (not unlike ASMR, but less potentially creepy).
  • Lyrics: the cultural emphasis on the personal narrative means songs might be trying to make “one-on-one” connections between artist and listener. Petrusich notes the highly personal, introspective and confessionary lyrics of Drake and Kanye and wonders if headphone-centric listening encourages certain music genres.
  • Privacy: In a similar vein, she references former Talking Heads frontman David Byrne (who wrote How Music Works) on how certain music genres encourage headphone usage because well, no one necessarily wants to blast their overly emo, offensive or sensual music tastes for everyone to hear (and judge).

The Takeaway

Unsurprisingly, music as a medium is going through shifts directly impacted by the way we experience the world more privately, through smaller personal devices including smartphones. This isn’t too unlike the decision to stay at home and watch certain genres of movies while we’re only willing to go to movie theaters for big epics.

But aside from just being a matter of personal preferences (to which the music needs to adapt, as it always has), there are, of course, negatives that include the real physical dangers of constantly tuning out the rest of the world as well as early hearing loss for both listeners and the people mastering for headphones.

Yet, on the other hand, headphones could simply be a necessary adaptation in an increasingly noisy and distracted world and as mentioned before, can invite us to look inward (which isn’t always a bad thing).

Unless you’re an audiophile, you might not care if the sound of music dramatically shifts as long as it sounds fine and gives you what you need. But just like the risk of going through life wearing rose-colored glasses, there is something to be said about spending too much of your day with a drastically altered soundscape in your ears.

January 6, 2020

How smart airports are going to become more than transport hubs

Few of us would count the airport as a place to spend any longer than necessary. Airports are due for an overhaul that doesn’t just keep passengers there, but brings in crowds of non-travelers all the same. We take a glimpse into the not-so-distant future where both the appearance and role of airports will evolve to accommodate a world constantly on the move.

How they’ll evolve as airports

In an article for Skift by Sean O’Neill and Brian Sumers, they give a comprehensive overview of how airports will evolve in the near future. A key aspect of this transition will be underscored by several technological improvements that are meant to improve the passenger experience:

  • Passenger recognition: biometrics such as facial, iris recognition and passenger profiles might help to cut down on the importance of physical travel documents and the time it will take from arrival at the airport to boarding. Further, the use of Blockchain tech could make it easier to securely share information between parties.
  • Accessibility: Improved sensors allow airport admin to keep better track of the site-by-site situation and track passenger flows, improving their ability to allocate resources such as motorized carts and eventually self-driving electric wheelchairs, which have been tested at Tokyo Narita.
  • Bag processing: Computed tomography (CT) technology will allow for improved baggage scans where passengers don’t need to remove liquids and laptops. London Heathrow has been trialing the technology since 2017 and is expected to install the equipment across its terminals by 2022. Paris’ Charles de Gaulle and Orly use the French postal service to send home prohibited items for passengers.
  • Green: Many airports are currently focused on boosting their energy efficiency, with particular attention paid to self-sufficiency. Beijing’s recently completed Daxing International Airport expects to derive more than 10% of its energy supply from renewable sources.

How they’ll become more

Of course, bringing airports into the future isn’t just motivated by the need to prepare for increased passenger numbers, with air travel slated to double by 2035, driven by the Asia-Pacific region. A lot of that change involves harnessing the economic potential of everyone who steps off an airplane into a given airport.

Previously, airports had to cater primarily to airlines, their principal tenants, who might not always be about making things cheaper or easier for passengers. And of course, there’s the matter of keeping the airport structure itself maintained and profitable. These factors combine to make the idea of building and expanding airports into destinations in and of themselves a serious consideration, especially with all those people and potential dollars flowing through their gates.

Singapore’s chart-topping Changi Airport is already leading the way with Jewel, a massive retail and entertainment complex that opened in 2019. Similar expansions are underway at Hong Kong International Airport and Qatar’s Hamad International Airport. And even if an airport doesn’t undergo remodeling, a number in the U.S. are giving “terminal tourism” a shot, allowing non-traveling visitors through security checkpoints to meet friends and family, or just walk around (and hopefully buy something in the process).

In short, these approaches to diversifying income sources mean added stability for airports amid financial uncertainty for their airline tenants.

Next Step: Aerotropolises?

While we’ve only been talking about passengers, customers and similar visitors thus far, it’s important to mention that even with the advent of smart tech, airports will still take a ton of people to run them for the foreseeable future and these staff might start moving closer to their workplace.

Urban HUB describes the transition from airports built on the periphery of cities, or far from them, to airports that are self-contained cities in themselves. In his book Aerotropolis: The Way We’ll Live Next, John Kasarda envisions a future where the urban core of such cities would be the airport itself. That said, reconciling a positive and profitable passenger experience with the obvious security requirements of such an unabated flow of people will take time.

Still, we’re excited to see the possibility of airports evolving into gathering places that draw non-passengers to them. Given that these airports bring in people from many different places, we can expect a lot of opportunities for meaningful connections, exchanges, creative projects and new facets of culture yet to be formed. Give it a few years and dropping “anyone wanna grab dinner at the airport?” in the group chat might not seem so weird.

December 5, 2019

"Wonder Material" Graphene: Will it Change or Break the Game?

As the thinnest yet strongest material on Earth, graphene boasts a plethora of other amazing properties. Widely considered a “wonder material,” how will it impact the physical world we know once it’s incorporated into everything from batteries and medical sensors to windows and condoms?

What is graphene?

Graphene is an allotrope (a given physical form) of carbon. You likely own or have encountered other carbon allotropes, such as the graphite in pencils, charcoal-cooked yakitori or the diamonds you might find in a set of grills. Yet graphene is a “new arrival” that has actually been produced by accident for centuries through applications of graphite. It was observed in 1962 before being rediscovered, isolated and characterized in 2004. There are many special properties packed into such a relatively “simple” composition, most notably:

  • Thin: At one atom “thick,” graphene is basically a super-thin sheet of linked carbon atoms (pictured above) and is currently the thinnest known material.
  • Strong: It’s also the strongest known material relative to its thickness, at 100 times stronger than the strongest steel.
  • Low Density: Again, compared to steel, the material is significantly less dense.
  • Conductive: It’s an amazing heat conductor and a great conductor of electricity too.
  • Permittivity: High permittivity means it stores electric potential energy in an electric field. Combined with graphene’s thinness and high surface area, this means the potential for better batteries.
  • Semi-Permeable: It’s still porous enough to allow water through while filtering other substances.

There are, of course, many other properties of graphene that we simply don’t science enough to be able to explain properly, which makes it suitable for a lot of yet undiscovered uses.

How could it be used?

  • Sex: Among other companies, the Bill and Melinda Gates Foundation has looked into using graphene to make even thinner but stronger condoms, offering a double whammy for both pleasure and protection.
  • Military and Law Enforcement: The material can absorb twice the amount of force as Kevlar, the current most commonly used material in bulletproof vests.
  • Fashion: The material’s properties make it a no-brainer for techwear (such as with Volleback’s Graphene Jacket), but we’re curious to see other fashion contexts where it could be used.
  • Medicine: The material’s thinness and conduciveness pave the way for wearable dermal sensors that help us discreetly track our health and fitness.
  • Sports: With such a high strength to weight ratio, the material has been used professionally as early as the 2018 Winter Olympics in Pyeongchang, South Korea, when it was used to construct a medal-winning sled.
  • Desalination: the fineness of the structure might be able to let water through but filter out salt, which could potentially revolutionize desalination and increase freshwater supplies.
  • Hair Dye: While not as seemingly game-changing, graphene offers a comparable and non-toxic alternative to hair dyes, while giving hair anti-static and thermal resistance properties.

Are there drawbacks?

With so many potential uses for graphene, it’s not hard to see why it’s hailed as a wonder material, alongside eco-friendly favorites like fungal mycelium. For all this potential, though, are there costs to this wonder material beyond the current financial constraints of producing it?

The risks surrounding graphene tend to start with its potential to harm us simply because our body doesn’t know what to do with such a “novel material.” For one, it’s brittle, and being thin and strong makes it sharp enough when fractured to pierce cell membranes and interfere with their function. And as with asbestos, a material once prized for fireproofing, further research suggests graphene has the potential to be toxic when inhaled in quantities the body can’t get rid of.

Then there’s of course, the uncertainty of how the needs of scientific progress, commerce, creativity and industry will combine to produce unpredictable results — especially when it comes to drawing inspiration from nature, playing with genes and involving other living things aside from us.

For example, researchers added graphene to a spider’s drinking water, allowing it to produce silk strands that could hold the weight of a human. This makes it significantly stronger than BioSteel, developed in the early 2000s, which comes from goats genetically modified to produce silk from Orb Weaver spiders in their milk.

The Takeaway

We’re always excited to hear about new technology especially when that tech takes the form of a substance that can be applied to different contexts.

Graphene represents one of those materials we imagine when we think of a fantastical future where everything is functionally efficient to the point of otherworldliness. Note that this isn’t the same as how non-stick Teflon became the trendy material in cookware before it fell into disrepute for being toxic and heat-resistant silicone became popular.

Graphene incorporates so many desirable traits into one tiny material that one day, when it becomes easy enough to create (even, say, in our own homes), there’s a high chance it will quickly be incorporated into just about everything, integrating seamlessly in both a functional and artful way.

November 25, 2019

You Have a Problem: Reframing Gear Acquisition Syndrome

We take a much-needed look at G.A.S., what causes it and how to pass it. We promise that this will likely be the last double entendre involving the word “gas” in this article.

What is G.A.S?

It’s not quite the common cold, but it can make us just as miserable. G.A.S. stands for “gear acquisition syndrome,” a strain of addictive retail therapy commonly associated with photographers. It involves purchasing gear at a rate higher than needed, often distracting from the activity the gear’s intended for.

Yet this type of acquisitive behavior can easily affect non-photographers as well, such as people who work with audio. Rob Power and Matt Parker of MusicRadar outline the seven signs of G.A.S., which just as accurately represent its phases. We’ve listed them here with examples from our own experience with G.A.S. (aggregated so we don’t single anyone out, Nate).

  1. Dissatisfaction: you’re dissatisfied with your current equipment.
  2. Desire: you see a new piece of equipment that will “complete” you.
  3. Research: you suffer hours of paralysis by analysis wading through options.
  4. Purchase: you break the deadlock with a rapid series of smaller purchases or a single big buy.
  5. Guilt: the gaping hole in the credit card or bank account leaves you pondering your decision, which may take you back to #3 to confirm if you made the right decision.
  6. Acceptance: you come to terms with what you’ve done, and might be filled with newfound and unbridled optimism toward your creative output in the vein of “Oh, The Places You’ll Go!”
  7. Relapse: your unresolved dissatisfaction quickly returns to attack your new creative implement, potentially as you discover that one missing feature that will completely upend your career.

What causes G.A.S?

Photographer, neuroscientist and writer Joshua Sariñana gives a highly detailed breakdown for G.A.S. and also explains the neurochemical mechanisms for how stressors trigger impulsive behavior and how purchases tap into our brain’s reward center. But of the many possible causes for those stressors, he proposes the most likely culprit in creatives: the fear of creativity itself.

Uncertainty: The creative process is already fraught with uncertainty and this uncertainty gives rise to fear of failure, criticism or even critique.

Catastrophizing: This is a common behavior where we always imagine the worst-case scenario. Combined with an existing cognitive bias against ourselves, this behavior repeats and small challenges seem insurmountable.

Avoidant Behavior: Like most living things, we tend to avoid discomforting things, even if that very thing is beneficial to us.

Buying Gear to Ease the Pain: Sariñana notes the potential for buying new gear to resemble drug abuse in the sense that we quickly acclimate to the ‘hit’ that comes with our new purchase, only to seek out bigger and better rewards.

How to get past gas

If you or someone you know has G.A.S., here are some ways to tackle the problem. Some involve dealing with the physical objects themselves while others focus on the mindset that leads to G.A.S.:

Realize you may have it: Even if you’re not a “gear head,” you might be acquiring services, plugins, memberships and subscriptions just as you would physical tools.

Validate yourself: Remind yourself that you are enough, even if your tools were to magically become primitive tomorrow. You have the talent to create something good with what essentials you have right now and the resourcefulness to improve on it later in the polishing phase.

Unplug: Constant exposure to iconic, famous and professional-level content (or simply content we love) is a standing reminder of how painfully inadequate our own work can feel. On top of that, the democratization of creative tools means we are a new market to be targeted with gear marketing.

Be deliberate: Whether that means gathering references only in the planning phase or researching gear only at designated times, if you find yourself stressing more about equipment than about creating, it might be time to dial back your media exposure.

Each item becomes a promise: Realize that each piece isn’t just something to use: you’ll have to maintain it, and some items require further purchases to keep them in good condition. If you have too many promises to keep, KonMari (Marie Kondo’s Shinto-based tidying methodology) your gear, digitally if you must: gather all your tools in one place (or start with a single category if you own that much) and notice how much you have. Keep the essentials, then only the pieces that stir positive emotions. Take everything else out of play.

Get creative: This doesn’t just mean going out (or staying in) and doing the thing you bought the gear for. It means finding workarounds for limitations across your entire creative setup: the gear, you personally, and your situation. Consider using creative constraints to your advantage.

Borrow or rent: This might help you to let go of the idea that you need to have (as in own) a given tool to validate your creative title, and be comfortable with the fact you just need to use it for that project — especially true if you need to beef up your tiny mirrorless camera just so a client takes you seriously on that day. Likewise, borrowing or renting lets you “try before you buy.”

Co-buy or own: Or, if you’ve thought ahead and are sure you want something and will use it for the long term: commit to a few key purchases, either yourself or with someone else, and then commit to using them for many years. Once committed, you’ll come to appreciate your tools and accept their limitations in conveying what you put into them; use them long enough and you’ll love them too much to lend out or replace.

Make shit: Learn to be comfortable making highly flawed and imperfect work with no intention of sharing it (and with the possibility that nothing will come of it). The obsession with constantly making work for display, to reinforce a given title, can push you to always “put your best foot forward,” and buying new tools adds that polish.

The Takeaway

It’s okay. Everyone has suffered a bad case of G.A.S. or several relapses over the years (we’re pretty sure we’ve had a few). What matters is that you catch yourself early and tweak your rate of acquisition to match your growing skill level or the actual demands of your jobs and career aspirations.

Regardless of what stage you might be at, if you think you might be catching it, we highly recommend this detailed account by “gear addict turned photography addict” Olivier Duong.

In writing this, we found a disproportionate amount of literature connecting G.A.S. to photographers and to a lesser degree, musicians. But this problem extends far beyond those two fields and even beyond physical “gear” as we know it to include subscriptions, plugins, services and software.

Whether you’re an artist who’s bought maybe a hundred Copic markers too many or a hobbyist sewist who’s filled their basement with more bolts of fabric than they have projects for, we’d like to hear any experiences with G.A.S. you’d like to share, or suggestions for a broader term that captures this insecurity-driven acquisitive behavior in creatives.

October 25, 2019

The Coming Age of Fake Faces and Voices

As AI and machine learning become better at reproducing human likenesses and speech, we wonder how society and the creative industries will cope once the technology becomes widespread. We look at the possible ramifications of Deepfakes and the lesser-known Adobe speech engine VoCo, dubbed “Photoshop for the voice”.

Deepfakes and VoCo

By now, the Internet is no stranger to Deepfakes, whether through hearing about their baser use cases or laughing our way through “re-cast” scenes from iconic films. The technology uses multiple images or footage of a person’s face to create an animated model that can be superimposed atop the original. But few seem to be aware of a similar and arguably more powerful technology: fake voices. When it was announced in 2016, VoCo was touted as Adobe’s “Photoshop for voice,” and while updates have been sparse since, other similar platforms have stepped in, such as Lyrebird.

To get a feel for what VoCo can do, check out the video of the technology’s debut at Adobe MAX 2016. It shows the speech engine replicating the voice of actor and director Jordan Peele (who co-hosted) to make him say some funny but embarrassing things he has never said before — all using only 20 minutes of his recorded speech. Coincidentally, Peele also made a PSA in which he provided the voice of a deepfaked President Obama in an effort to underscore a renewed need for media literacy in the age of Deepfakes.

Misinformation, Echo Chambers and Social Fallout

We’re continuing to keep a finger on the pulse of big data’s potential to amplify narratives, sway conversations and change culture for better or worse. Unfortunately, in the age of fake news, fact-checking is playing a losing game of cat and mouse with dubiously factual content or straight-up misinformation.

We’ve always used a combination of technology and creativity — well-intentioned or malicious — to shape reality, whether that means “cheating” shots to get a certain look on a budget or doctoring media for libelous reasons. Yet every generation has also had experts who keep us informed of how these things are done. The most worrying issue is twofold: the tech keeps improving, and we’ve stopped listening. Even when shown evidence against their beliefs, people dig in their heels and defend them.

Social media’s information silos and echo chambers threaten to become even worse once the average tech-savvy netizen can Deepfake and VoCo-lize with ease. When we lose that much more of our ability to trust our senses (something we’ve already been losing of late), even the most engaged of us become despondent about the state of the world and eager to just shut everything off.

The Potential Creative Outcomes

All said, it would be cynical to conclude that the only uses for these technologies are nefarious ones. “Hate the player, not the game,” as they say, and we see a lot of potential for Deepfakes and VoCo to assist artists and creative workers.

For creatives providing their likenesses or voices and the people processing them, we see this new dynamic going one of several ways:

  • Quick Fixes: Not unlike Photoshop’s content-aware tools, Deepfake and VoCo-like technology can help patch up severe mistakes that can’t be fixed by conventionally editing the source material. This should lower the cost of reshoots and other production expenses, as Adobe originally stated for VoCo.

That said, getting things done right the first time will always prevail, and someone will still be thankful not to have to Deepfake- or VoCo-correct hours of poorly captured footage. Besides, the technology still might not replace the real thing (which is why practical film effects still have an edge on CGI in many cases).

  • Updated Terms: We imagine contracts will need updating down the line to prevent someone from creating derivative content off the images provided for a given project. For instance, an agency could create advertising materials out of video footage of us from, say, a music video — so long as we’ve signed off on it.

But as the legal stance on Deepfakes and similar content catches up, we could see the addition of key clauses stipulating something to the effect of: “the client shall not create new material generated by AI taught using the artist’s likeness, voice or previous work.” Or, if we allowed it, we could negotiate compensation pegged to how much content is generated, set against a portion of our day rate (we’re going to assume Susan Bennett, the original voice of Siri, was paid handsomely for her efforts).

  • Composite People: If Generated Photos’ 100,000 Faces project (which generated that many portraits through machine learning) has taught us anything, it’s that AI is getting better and better at producing realistic likenesses of people (albeit, for now, only portraits). We can and should protect the rights to our unique selves and the content generated from them, but what if we become less than a thousandth of a generated person, in body or voice? Perhaps we’d be entitled to a thousandth of the royalties, depending on the platform!
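
A fractional-royalty scheme like the one imagined above is just arithmetic: split a royalty pool pro rata by each person's share of the composite. No such platform exists yet, so the function and payload below are purely hypothetical:

```python
def royalty_shares(contributions: dict, total_royalty: float) -> dict:
    """Split a royalty pool in proportion to each contributor's share of a
    composite likeness (hypothetical scheme; no real platform does this yet)."""
    total = sum(contributions.values())
    return {person: total_royalty * c / total for person, c in contributions.items()}
```

So if your face contributes one part in a thousand to a generated person that earns $1,000 in royalties, `royalty_shares({"you": 1, "everyone_else": 999}, 1000.0)` would pay you exactly $1.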

The Takeaway: A Re-Shuffling of the Creative Landscape

All in all, we still don’t know how much machine-generated personalities will change the creative landscape, but we doubt it will be a clear-cut net positive or negative. Take our previous example of digital clothing collections made for the gram: in cases like these, the designer keeps their job, the pattern maker loses theirs, and the 3D modeler posing outfits onto customer photos gains a new one.

Even once we get to the stage where we’re using fully posable, photorealistic models of digital people with text-to-speech that nails personality, we predict the most respected works and their creators will continue to pride themselves on employing, connecting with and working alongside real humans who can think for themselves, versus simply doing or saying what they’re programmed to do.

September 13, 2019

AI-assisted News and Its Future in the Attention Economy

We’re constantly hearing about new apps that aim to be the destination for our valuable attention. But can a news aggregator without a social component do the same? ByteDance’s TopBuzz shows potential as a contender, but goes beyond just bringing us the news we want.

What is TopBuzz?

TopBuzz is the English-language version of Toutiao, the flagship Chinese entertainment and news aggregator app made by ByteDance, which also owns TikTok. The app uses machine and deep learning algorithms to create personalized feeds of news and videos based on users’ interests.

  • User profiles: This is initially built on the app’s understanding of the user’s demographics (age, location, gender, and socio-economic status).
  • Content: The system uses natural language processing to determine if an article is trending, whether it’s long or short, and its timeliness (evergreen or time-bound).
  • Context: It also accounts for location-related data like geography, weather, local news, etc.
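
ByteDance has not published its ranking algorithm, but the three signal families above can be sketched as a toy scoring function. Everything here — the weights, the field names, the dataclasses — is a hypothetical illustration, not TopBuzz’s actual model:

```python
from dataclasses import dataclass

@dataclass
class Article:
    topic: str          # content signal: what the piece is about
    is_trending: bool   # content signal: trending status
    is_evergreen: bool  # content signal: timeliness class
    region: str         # context signal: where it's relevant

@dataclass
class User:
    interests: dict     # user-profile signal: topic -> learned affinity (0..1)
    region: str

def score(article: Article, user: User) -> float:
    """Blend the three signal families with made-up weights."""
    profile = user.interests.get(article.topic, 0.0)
    content = (0.3 if article.is_trending else 0.0) + (0.1 if article.is_evergreen else 0.0)
    context = 0.2 if article.region == user.region else 0.0
    return profile + content + context

def build_feed(articles, user, k=10):
    """Rank candidates by score and return the top k as a personalized feed."""
    return sorted(articles, key=lambda a: score(a, user), reverse=True)[:k]
```

A user with a strong tennis affinity would see a trending local tennis story ranked above an evergreen foreign politics piece, since the profile, content and context terms all stack in its favor.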

By the Numbers

  • 23M: Monthly active TopBuzz users as of October 2, 2018, up from 1.8M in November 2017
  • 36x: Increase in pageviews (34M) referred across Chartbeat publishers worldwide from 2017 to 2018.
  • 24 hours: The time it takes for Toutiao (and likely TopBuzz) to figure out a reader.
  • 200,000: Officially partnered publishers and independent creators, including Reuters, CNN, the New York Times and BuzzFeed. YouTube creators can also sync their channels, while bloggers can publish directly or deliver content via RSS.

The Extra Mile

What makes Toutiao stand out among aggregators is that it doesn’t just serve content: it creates it too. During the 2016 Olympics, Toutiao debuted Xiaomingbot to create original news coverage, publishing stories on major events faster than traditional outlets—as in seconds after an event ended.

For an article about a tennis match between Andy Murray and Juan Martin Del Potro, the bot pulled real-time score updates from the Olympics organization, took images from a recently acquired image-gathering company, and monitored live text commentary on the game.
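
Xiaomingbot’s internals aren’t public, but one common approach to this kind of automated sports coverage is template filling over a structured data feed. The function and payload shape below are our own hypothetical sketch of that idea, not ByteDance’s implementation:

```python
def render_match_report(match: dict) -> str:
    """Fill a fixed report template from structured feed data
    (scores, event metadata, a commentary highlight)."""
    headline = f"{match['winner']} defeats {match['loser']} {match['score']}"
    body = (
        f"{match['winner']} beat {match['loser']} {match['score']} "
        f"in the {match['event']}. "
        f"Key moment: {match['highlight']}"
    )
    return f"{headline}\n\n{body}"
```

Because the template renders the instant the final score lands in the feed, publishing “seconds after the event” is mostly a matter of pipeline latency, not writing speed.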

During the Olympics, the bot published 450 stories of 500–1,000 words each, achieving read rates (reads per impression) on par with a human writer’s.

ByteDance used this same AI content creation to build a bot that generates fake news, which in turn trains the app’s content filter. However, it’s not yet clear whether TopBuzz publishes AI-generated content in English as well.

The Potential for TopBuzz

While Facebook and Twitter also use machine learning to refine recommendations, they rely more heavily on a user’s social connections. TopBuzz, like Feedly or Flipboard, is strictly a news aggregator with no social component.

But what makes TopBuzz and Toutiao (and future would-be competitors) unique is how hard they’ve doubled down on using AI to win the content game. We’ve all experienced the twenty-odd-minute Netflix sift through recommendations based purely on our viewing history; because Toutiao analyzes so many other factors, it has reduced this lag in the consumption cycle to virtually nothing (once it’s figured out the user’s habits).

This combination of AI-fueled curation and creation could set the standard for apps to come — and there are likely to be more. ByteDance’s success with TikTok (which hit one billion users this year) was enough to prompt Facebook to make Lasso in response, and there are bound to be competitors after the same level of stickiness that Toutiao and TopBuzz have achieved.

We’ve always been hungry for knowledge, no question there. But as our attention continues to be commodified and audiences become pickier about what they consume, demand for high-quality information (regardless of who or what created it) will increase too. The result? We get to “upgrade” to a cleaner, albeit more addictive, information diet served buffet-style. Users spend an average of 74 minutes on Toutiao. Will that eventually be the “sweet spot” for our news consumption?

But Not So Fast

In our time with TopBuzz so far, we don’t doubt the app’s learning approach. But the uneven quality of its publications means there’s a fair bit of clickbait from questionable outlets, and a catchy headline that gets us to click does not equate to a great experience. Most tech companies are notoriously opaque about their algorithms, so we’re naturally a bit skeptical about what the app counts as content you personally find compelling. It’s only been a few weeks; in six months, it would be worth revisiting to see how the experience has improved.

August 13, 2019

Residuals for everyone—selling our data to teach AI

As more companies and organizations start relying on AI, more and more data will be needed to feed (and train) these powerful programs, but not all data is created equal. While some might be valuable, we might not be so ready to share it. But if there were a means of securing our data and earning for every time it was used, would we be more willing to part with it?

Medical researchers start dabbling in AI, but hit a wall

Medical professionals are starting to tap into machine learning as a means of furthering their work, especially to find patterns that can help interpret their patients’ test results. Stanford ophthalmologist Robert Chang hopes to use eye scans to track conditions like glaucoma as part of this ongoing tech rush.

The problem, however, is that doctors and researchers have trouble gathering enough data from their own patients or anyone else’s because of the way that data is handled. Indeed, a great deal of medical data is siloed due to differing policies on sharing patient information. This makes it challenging to share patient metrics between institutions, and consequently to reach critical data mass.

Kara and Differential Privacy

Oasis Labs, founded by UC Berkeley professor Dawn Song, securely stores patient data using blockchain technology that encrypts and anonymizes the data while preventing it from being reverse-engineered. It also provides monetary incentives to encourage participants, who could be compensated each time their data is used to train artificial intelligence.

It’s not just the promise of money that could make patients more willing to submit their data. Song and Chang are trialling Kara, a system that uses differential privacy to ensure the AI trains on data stored on Oasis’ platform while that data remains invisible to researchers.
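
Kara’s exact mechanism isn’t spelled out here, but the core idea of differential privacy can be illustrated with the classic Laplace mechanism: answer aggregate queries (like a mean) with calibrated random noise, so the result is useful while no individual record can be inferred from it. This is a generic textbook sketch, not Oasis’ implementation:

```python
import math
import random

def laplace_noise(scale: float) -> float:
    """Draw a sample from Laplace(0, scale) via inverse-CDF sampling."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))

def private_mean(values, epsilon: float, lower: float, upper: float) -> float:
    """Differentially private mean: clamp values to [lower, upper], then add
    Laplace noise scaled to the query's sensitivity ((upper - lower) / n)."""
    clamped = [min(max(v, lower), upper) for v in values]
    true_mean = sum(clamped) / len(clamped)
    sensitivity = (upper - lower) / len(clamped)
    return true_mean + laplace_noise(sensitivity / epsilon)
```

With thousands of patient records the noise is tiny relative to the aggregate, so researchers get an accurate statistic; but because the noise swamps any single record’s contribution, adding or removing one patient barely changes the answer.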

Quality Matters

For the medical industry, access to quality data will become increasingly important as reliance on AI grows. Quality here doesn’t just mean individual data points (though a grainy eye scan could throw off the machine’s learning) but the entire data set.

In order to prevent biases, which AI systems are prone to depending on what data sets they are fed, a system will need particular segments of the population to contribute data to round out its “training.” For this to happen, incentives will need to be carefully weighed and valued. Training a medical AI designed for a general population, for instance, would require samples from a diverse group of individuals including those with less common profiles. To incentivize participation, compensation might be higher for this group.
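
One hypothetical way to operationalize that incentive idea is to pay each population segment inversely to its share of the existing data set, so scarce profiles earn more per contribution. The scheme and numbers below are our own illustration, not anything Oasis has announced:

```python
def compensation_weights(segment_counts: dict, base_fee: float) -> dict:
    """Pay each segment in inverse proportion to its share of the current
    data set (hypothetical scheme), normalized so fees stay near base_fee."""
    total = sum(segment_counts.values())
    raw = {seg: total / n for seg, n in segment_counts.items()}  # inverse share
    norm = sum(raw.values()) / len(raw)
    return {seg: base_fee * w / norm for seg, w in raw.items()}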

Otherwise, the designers of the AI could simply choose not to include certain groups, as has happened in the past, thus creating a discriminatory AI. In that case, it’s less a matter of the machine doing the learning and more of the people doing the teaching. Either way, the resulting discriminatory AI has very real power to change the course of people’s lives, such as by filtering out their job applications.

Data ownership, Dividends and Industries

Despite these drawbacks, a combination of monetization and secure storage of personal data could signal the beginning of a new market where individuals earn a fee for sharing data that otherwise wouldn’t have been shared at all; in essence, royalties for being ourselves (assuming we’re “valuable,” that is).

For the creative industry, the consensus is that, for all its strides, AI has yet to evolve beyond a very powerful assistant in the creative process. At present, it can create derivative work resembling the art it’s been fed but still lacks the ability to absorb what we know as inspiration. For example, IBM used its Watson AI to create a trailer for a horror movie after feeding it 100 trailers from films of that genre, with each scene segmented and analyzed for visual and audio traits.

For now, the emerging data market doesn’t seem lucrative enough to birth a new class of workers (lest we all quit today to become walking data mines). But supposing the incentives were enticing and a company like Oasis could guarantee data privacy, could we see more creators willing to give up some of their work? Perhaps even unpublished work that would never have been seen? Would quick file uploads, coupled with a hassle-free “for machine learning only” license, mean an influx of would-be creators hoping to make data dividends off work they could license elsewhere too?

On one hand, it would give creatives a way to earn residuals off their work, given that AI needs thousands if not millions of samples and other sources (such as stock-asset websites) might not pay as well. That said, just as different data sets are needed for different purposes, we might see the emergence of a metrics-based classification system to objectively grade subjective work and assign value to it.

And if those works can be graded, so too can their creators with all the opportunities that follow a “quality data” distinction. Maybe one day when a program like Watson reaches celebrity artist status, we can brag to our peers, “yeah, I taught it that.”
