We’ll start with a moment of silence. RIP Dan Kaminsky, master hacker, teacher, FOO, and a great showman who could make some of the more arcane corners of security fun. And one of the few people who could legitimately claim to have saved the internet.
- Snorkel is making progress automating the labeling process for training data. They are building no-code tools to help subject matter experts direct the training process, and then using AI to label training data at scale.
- There’s lots of news about regulating AI. Perhaps the most important is a blog post from the US Federal Trade Commission saying that it will consider the sale of racially biased algorithms as an unfair or deceptive business practice.
- AI and computer vision can be used to aid environmental monitoring and enforce environmental regulation–specifically, to detect businesses that are emitting pollutants.
- Facebook has made some significant progress in solving the “cocktail party problem”: how do you separate voices in a crowd sufficiently so that they can be used as input to a speech recognition system?
- The next step in AI may be Geoff Hinton’s GLOM. It’s currently just an idea about giving neural networks the ability to work with hierarchies of objects, for example the concepts of “part” and “whole,” in the hope of getting closer to modeling human perception.
- Twitter has announced an initiative on responsible machine learning that intends to investigate the “potentially harmful effects of algorithmic decisions.”
- How do we go beyond statistical correlation to build causality into AI? This article about causal models for machine learning discusses why it’s difficult, and what can be done about it.
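One reason causality is hard is that observational data alone can’t distinguish correlation from a hidden common cause. A minimal sketch (toy simulated data, not any particular causal-ML library): a hidden confounder Z drives both X and Y, so X and Y are strongly correlated even though X has no effect on Y; intervening on X, in the spirit of Pearl’s do-operator, makes the correlation vanish.

```python
import random

random.seed(0)

def corr(xs, ys):
    # Pearson correlation, pure Python
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

N = 10_000

# Observational data: a hidden confounder Z drives both X and Y;
# X has no causal effect on Y.
zs = [random.gauss(0, 1) for _ in range(N)]
x_obs = [z + random.gauss(0, 0.5) for z in zs]
y_obs = [z + random.gauss(0, 0.5) for z in zs]

# Interventional data: do(X = x) sets X independently of Z,
# severing the back-door path through the confounder.
x_do = [random.gauss(0, 1) for _ in range(N)]
y_do = [z + random.gauss(0, 0.5) for z in zs]

obs_corr = corr(x_obs, y_obs)  # strong: confounding looks like causation
do_corr = corr(x_do, y_do)     # near zero: the intervention reveals no effect
```

A model trained only on the observational data would happily predict Y from X; a causal model has to know (or discover) that the dependence runs through Z.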
- Iron Man? The price of robotic exoskeletons for humans is still high, but may be dropping fast. These exoskeletons will assist humans in tasks that require strength, improved vision, and other capabilities.
- The Google Street View image of your house can be used to predict your risk of a car accident. This raises important questions about ethics, fairness, and the abuse of data.
- When deep fakes become cheap fakes: Deep fakes proliferated during the Amazon unionization campaign in Georgia, many under the name of Amazon Ambassadors. These are apparently “fake fakes,” parodies of an earlier Amazon attempt to use fake media to bolster its image. But the question remains: what happens when “deep fakes” are also the cheapest way to influence social media?
- DeepFakeHop is a new technique for detecting deep fakes, using a new neural network architecture called Successive Subspace Learning.
- One of the biggest problems in AI is building systems that can respond correctly to challenging, unexpected situations. Changing the rules of a game may be a way of “teaching” AI to respond to new and unexpected situations and make novelty a “first class citizen.”
- A robot developed at Berkeley has taught itself to walk using reinforcement learning. Two levels of simulation were used before the robot was allowed to walk in the real world. (Boston Dynamics has not said how their robots are trained, but they are assumed to be hand-tuned.)
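The core reinforcement-learning loop is the same whether the agent is a simulated robot or a toy: act, observe a reward, and update value estimates. A minimal sketch (tabular Q-learning on an invented five-state “walk to the goal” task; the Berkeley robot uses far more sophisticated methods):

```python
import random

random.seed(1)

# A toy walking task: states 0..4 on a line, start at 0, goal at 4.
# Actions: 0 = step left, 1 = step right. Reward 1.0 only on reaching the goal.
N_STATES, GOAL = 5, 4
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.3

Q = [[0.0, 0.0] for _ in range(N_STATES)]

def step(state, action):
    nxt = max(0, min(N_STATES - 1, state + (1 if action == 1 else -1)))
    reward = 1.0 if nxt == GOAL else 0.0
    return nxt, reward, nxt == GOAL

for episode in range(300):
    s = 0
    for _ in range(10_000):  # step cap so early episodes terminate
        # epsilon-greedy: mostly exploit, sometimes explore
        if random.random() < EPSILON:
            a = random.randrange(2)
        else:
            a = Q[s].index(max(Q[s]))
        s2, r, done = step(s, a)
        # Q-learning update: bootstrap from the best next action
        Q[s][a] += ALPHA * (r + GAMMA * max(Q[s2]) - Q[s][a])
        s = s2
        if done:
            break

# Greedy policy after training: step right from every non-goal state
policy = [Q[s].index(max(Q[s])) for s in range(N_STATES - 1)]
```

The two-stage simulation the article describes serves the same purpose as the episode loop here: let the agent make its mistakes somewhere cheap before it touches the real world.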
- Work on data quality is more important to getting good results from AI than work on models–but everyone wants to do the model work. There is evidence that AI is a lot better than we think, but its accuracy is compromised by errors in the public data sets widely used for training.
- Moxie Marlinspike has found a remote code execution vulnerability in Cellebrite, a commercial device used by police forces and others to break encryption on cell phone apps like Signal. The exploit can be triggered by files placed in an app on the phone being scanned, possibly rendering evidence collected with Cellebrite inadmissible in court.
- What happens when AI systems start hacking? This is Bruce Schneier’s scary thought. AI is now part of the attacker’s toolkit, and responsible for new attacks that evade traditional defenses. This is the end of traditional, signature-based approaches to security.
- Confidential computing combines homomorphic encryption with specialized cryptographic computation engines to keep data encrypted while it is being used. “Traditional” cryptography only protects data in storage or in transit; to use data in computation, it must be decrypted.
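The homomorphic property, computing on ciphertexts so that the result decrypts to the right answer, can be shown with a deliberately simple example: textbook RSA is multiplicatively homomorphic. This sketch uses tiny, insecure demonstration parameters; real confidential-computing schemes (e.g., lattice-based fully homomorphic encryption) support richer operations, but the core idea is the same.

```python
# Textbook RSA with toy parameters (insecure; illustration only).
p, q = 61, 53
n = p * q          # 3233
e, d = 17, 2753    # e*d = 1 (mod (p-1)*(q-1))

def encrypt(m):
    return pow(m, e, n)

def decrypt(c):
    return pow(c, d, n)

m1, m2 = 7, 11
c1, c2 = encrypt(m1), encrypt(m2)

# Multiply the *ciphertexts*: the result decrypts to the product of the
# *plaintexts*, even though the inputs were never decrypted.
c_prod = (c1 * c2) % n
result = decrypt(c_prod)  # 77 == 7 * 11
```

A server holding only `c1` and `c2` can compute `c_prod` without ever seeing 7 or 11, which is exactly the property confidential computing needs.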
- Secure access service edge could be no more than hype-ware, but it is touted as a security paradigm for edge computing that combines firewalls, security brokers, and zero-trust computing over wide-area networks.
- A supply chain attack attempted to place a backdoor into PHP. Fortunately, it was detected during a code review prior to release. One result is that PHP is outsourcing their git server to GitHub. They are making this change to protect against attacks on the source code, and they’re realizing that GitHub provides better protection than they can. “Maintaining our own git infrastructure is an unnecessary security risk”–that’s an argument we’ve made in favor of cloud computing.
- “Researchers” from the University of Minnesota have deliberately tried to insert vulnerabilities into the Linux kernel. The Linux kernel team has banned all contributions from the university.
- Entanglement-based quantum networks solve a fundamental problem: how do you move a qubit’s state from one system to another, given that reading a qubit causes wave function collapse? If this works, it’s a major breakthrough.
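The trick that makes this possible is quantum teleportation: a shared entangled pair plus two classical bits move a qubit’s state without anyone reading it. A minimal state-vector sketch in plain Python (three qubits: qubit 0 holds the sender’s state, qubits 1 and 2 are the entangled pair; this is a simulation of the textbook protocol, not any real network stack):

```python
import random
from math import sqrt

random.seed(7)

N = 3  # qubits; amplitudes indexed by 3-bit strings, qubit 0 most significant

def mask(q):
    return 1 << (N - 1 - q)

def h(s, q):  # Hadamard on qubit q
    m = mask(q)
    for i in range(len(s)):
        if not i & m:
            a, b = s[i], s[i | m]
            s[i], s[i | m] = (a + b) / sqrt(2), (a - b) / sqrt(2)

def cnot(s, c, t):  # controlled-NOT: control c, target t
    cm, tm = mask(c), mask(t)
    for i in range(len(s)):
        if i & cm and not i & tm:
            s[i], s[i | tm] = s[i | tm], s[i]

def x(s, q):  # bit flip
    m = mask(q)
    for i in range(len(s)):
        if not i & m:
            s[i], s[i | m] = s[i | m], s[i]

def z(s, q):  # phase flip
    m = mask(q)
    for i in range(len(s)):
        if i & m:
            s[i] = -s[i]

def measure(s, q):  # projective measurement; collapses s, returns the bit
    m = mask(q)
    p0 = sum(abs(s[i]) ** 2 for i in range(len(s)) if not i & m)
    out = 0 if random.random() < p0 else 1
    norm = sqrt(p0 if out == 0 else 1 - p0)
    for i in range(len(s)):
        if (1 if i & m else 0) != out:
            s[i] = 0.0
        else:
            s[i] /= norm
    return out

alpha, beta = 0.6, 0.8              # state to teleport (|a|^2 + |b|^2 = 1)
s = [0.0] * 8
s[0b000], s[0b100] = alpha, beta    # qubit 0 holds alpha|0> + beta|1>

h(s, 1); cnot(s, 1, 2)              # entangle qubits 1 and 2 (Bell pair)
cnot(s, 0, 1); h(s, 0)              # sender's Bell-basis rotation
m0, m1 = measure(s, 0), measure(s, 1)  # the two classical bits to transmit
if m1: x(s, 2)                      # receiver's corrections
if m0: z(s, 2)

base = (m0 << 2) | (m1 << 1)
recovered = (s[base], s[base | 1])  # qubit 2 now holds (alpha, beta)
```

The measurements yield only two random bits; the original amplitudes are never observed, yet they end up intact on the receiver’s qubit.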
- IBM Quantum Composer is a low-code tool for programming quantum computers. Could low- and no-code languages be the only effective way to program quantum computers? Could they provide the insight and abstractions we need in a way that “coded” languages can’t?
- A Software Bill of Materials is a tool for knowing your dependencies, crucial in defending against supply chain attacks.
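Concretely, an SBOM is a machine-readable dependency inventory. A minimal sketch in the style of the CycloneDX JSON format (the component list here is invented for illustration; real SBOMs are produced by tooling, not by hand):

```python
import json

# Minimal CycloneDX-style SBOM (illustrative components only).
sbom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.4",
    "version": 1,
    "components": [
        {"type": "library", "name": "requests", "version": "2.25.1",
         "purl": "pkg:pypi/requests@2.25.1"},
        {"type": "library", "name": "urllib3", "version": "1.26.4",
         "purl": "pkg:pypi/urllib3@1.26.4"},
    ],
}

def affected(sbom, name, bad_versions):
    """Check the dependency inventory against a vulnerability advisory."""
    return [c for c in sbom["components"]
            if c["name"] == name and c["version"] in bad_versions]

# When an advisory lands, the SBOM answers "are we exposed?" immediately.
hits = affected(sbom, "urllib3", {"1.26.3", "1.26.4"})
doc = json.dumps(sbom, indent=2)
```

That query is the whole point: when a supply chain attack or CVE is announced, you can answer “do we ship the affected component?” without spelunking through build systems.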
- Logica is a new programming language from Google that is designed for working with data. It was designed for Google’s BigQuery, but it compiles to SQL and has experimental support for SQLite and PostgreSQL.
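Logica’s rules are Datalog-style predicates; per the project’s documentation, a rule like `Grandparent(a, c) :- Parent(a, b), Parent(b, c);` compiles to an ordinary SQL join. A sketch of the kind of SQL such a rule corresponds to (hand-written for illustration, not Logica’s actual generated output), run against SQLite from the Python standard library:

```python
import sqlite3

# The Datalog-style rule
#   Grandparent(a, c) :- Parent(a, b), Parent(b, c);
# corresponds to a self-join on the Parent relation.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Parent (parent TEXT, child TEXT)")
conn.executemany("INSERT INTO Parent VALUES (?, ?)",
                 [("alice", "bob"), ("bob", "carol")])

rows = conn.execute("""
    SELECT p1.parent AS grandparent, p2.child AS grandchild
    FROM Parent AS p1
    JOIN Parent AS p2 ON p1.child = p2.parent
""").fetchall()
```

The appeal is that the rule states *what* a grandparent is, while the compiler worries about join order and the rest of the SQL plumbing.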
- An iPhone app that teaches you to play guitar isn’t unique. But Uberchord is one that comes with an API, which allows searching for chords, sharing and retrieving songs, and embedding chords on your website.
- The Supreme Court has ruled that implementing an API is “fair use,” giving Google a victory in a protracted copyright infringement case surrounding the use of Java APIs in Android.
- Still picking up the pieces of social networking: Twitter, context collapse, and how trending topics can ruin your day. You don’t want to be the inadvertent “star of Twitter.”
- Beauty filters and selfie culture change the way girls see themselves in ways that are neither surprising nor healthy. Body shaming goes to a new level when you live in a permanent reality distortion field.
- The Signal app, probably the most widely used app for truly private communication, has wrapped itself in controversy by incorporating a peer-to-peer payments feature built around a new cryptocurrency.
- Twitch will consider behavior on other social platforms when banning users.
- Bitcoin has been very much in the news–though not for any technology. We’re beginning to see connections made between the Bitcoin economy and the real-world economy; that could be significant.
- A different spin on salary differences between men and women: companies are paying a premium for male overconfidence, and that premium is costing billions.
- How do you teach kids about virtual money? Nickels, dimes, and quarters work. Monetizing children by issuing debit cards for them doesn’t seem like a good idea.
- The J. Craig Venter Institute, NIST, and MIT have produced an artificial cell that divides normally. It is not the first artificial cell, nor the smallest artificial genome. But unlike previous efforts, it is capable of reproduction.
- While enabling a monkey to play Pong using brain control isn’t new in itself, the sensors that Neuralink implanted in the monkey’s brain are wireless.