Alphabet’s Waymo vs Uber’s Otto: fight!
Google came out swinging against Uber yesterday: its self-driving car division Waymo filed a lawsuit alleging that Otto and Uber are “misappropriating Waymo trade secrets and infringing our patents” (read Waymo’s Medium post). This is the latest step in an escalating conflict between the two companies (MIT Tech Review): Google invested $250 million in Uber back in 2013, but it also acquired Waze, which is now moving into ride-sharing at a much lower cost than Uber. Waymo accuses the former head of its self-driving efforts of “positively Snowdenesque” behavior: downloading, and attempting to conceal the download of, no fewer than 14,000 files, then setting up a self-driving truck startup, Otto, using a LiDAR technology (the bubble on top of the car that uses lasers to “see” the outside world) strikingly similar to Google’s. Uber bought Otto six months after it launched and put its founder in charge of Uber’s self-driving efforts. (WIRED)
Trolls vs AI, step 2
Alphabet’s Jigsaw has opened up the API to its machine-learning model that rates comments on a scale of 0 to 100% toxic, the latest step in its war on trolls. The algorithm was announced in the fall, and several major news outlets are already experimenting with it; by opening up the API, Jigsaw lets anyone tap into a trained AI to flag potentially toxic comments, and even “integrate it into their website to show toxicity ratings to commenters even as they’re typing” (WIRED). A few thoughts: 1) I can see how a trolling competition could leverage this percentage score to produce even more harmful comments (“you’re only 45% toxic, come on, you can do better than that”). 2) Flagging comments for human review is probably the right approach, but what about websites that simply don’t have the capacity for human review? (Jared Cohen’s answer: their current default is censoring; we provide them a way forward.) 3) Will the AI toxicity standard be adapted in the (near) future to national standards of what’s acceptable and what’s not?
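To make the “flag for human review” idea concrete, here is a minimal sketch of how a site might use such a score. The request shape mimics a Perspective-style toxicity API, but the exact field names and threshold are my assumptions, not Jigsaw’s documented schema:

```python
import json

def build_request(comment_text):
    """Hypothetical request body for a Perspective-style toxicity API.
    Field names are assumptions, not the exact published schema."""
    return {
        "comment": {"text": comment_text},
        "requestedAttributes": {"TOXICITY": {}},
    }

def flag_for_review(toxicity_score, threshold=0.8):
    """Queue a comment for human review when the model's 0.0-1.0
    toxicity score crosses a site-chosen threshold."""
    return toxicity_score >= threshold

# A site would POST this JSON and read the score from the response.
body = json.dumps(build_request("come on, you can do better than that"))

print(flag_for_review(0.45))  # below threshold: published without review
print(flag_for_review(0.91))  # above threshold: held for a moderator
```

The threshold is where the editorial judgment lives: a site without moderators might set it low and auto-hide, while a staffed newsroom might set it high and only queue the worst comments.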
The world needs to stop using SHA-1
Most of you have no idea what a hash function is, but hash functions are absolutely essential to how parts of the internet work: a hash function takes a document and produces a short, fixed-size hash (a label) that describes it. It’s like going to a library and looking a book up by its call number rather than its title, except the hash is far harder to guess than a library code. We use this for many things (storage and lookup, as in the library, but also authenticating documents: is this really the book I want to read?). Crucially, we rely on the assumption that no one can find two documents that hash to the same value, even though we’ve known for a long time that such collisions must exist. Yesterday, researchers demonstrated a practical collision attack against SHA-1, a once-popular hash function that “many applications still rely on” (SHAttered.io, or see the Google Security Blog for more details).
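The library-label idea is easy to see in code. This sketch uses Python’s standard hashlib module to show that different documents get different fixed-size labels, and that SHA-256 (one common replacement for SHA-1) simply produces a longer label:

```python
import hashlib

# Two different "documents": their digests differ, and each digest has
# a fixed length no matter how long the input is.
doc_a = b"the book I want to read"
doc_b = b"a completely different book"

h_a = hashlib.sha1(doc_a).hexdigest()
h_b = hashlib.sha1(doc_b).hexdigest()

print(h_a != h_b)  # different inputs yield different digests (in practice)
print(len(h_a))    # SHA-1 digests are always 40 hex chars (160 bits)

# SHA-256, a recommended replacement, gives a 64-hex-char (256-bit) digest.
print(len(hashlib.sha256(doc_a).hexdigest()))
```

What SHAttered broke is exactly the `h_a != h_b` guarantee: the researchers produced two different PDFs whose SHA-1 digests are identical, which is why systems still trusting SHA-1 labels need to migrate.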
Have a women-march-on-Wikipedia Friday!