by hendler on 12/5/23, 2:09 PM with 328 comments
by somenameforme on 12/5/23, 3:45 PM
And looking back at the past, we've banned things people would never have imagined bannable. Make it a crime to grow a plant in the privacy of your own home and then consume that plant? Sure, why not? Make it a crime for a business to have the wrong opinion when it comes to who they want to serve or hire? Sure, why not?
Going the nuclear route and making the collection of data on individuals, aggregated or otherwise, illegal would hardly be some major jurisprudential leap. The problem is not that the technology exists, but that there is zero political interest in curtailing it, and we've a 'democracy' where the will of the people matters very little in terms of what legislation gets passed.
by klik99 on 12/5/23, 3:57 PM
by thesuperbigfrog on 12/5/23, 3:16 PM
Lots of cultures have the concept of a "guardian angel" or "ancestral spirits" that watch over the lives of their descendants.
In the not-so-distant technofeudalist future you'll have a "personal assistant bot" provided by a large corporation that will "help" you by answering questions, gathering information, and doing tasks that you give it. However, be forewarned that your "personal assistant bot" is no guardian angel and only serves you in ways that its corporate creator wants it to.
Its true job is to collect information about you, inform on you, and give you curated and occasionally "sponsored" information that high bidders want you to see. They serve their creators--not you. Don't be fooled.
by bonyt on 12/5/23, 3:31 PM
Many of our criminal laws are written with the implicit assumption that it takes resources to investigate and prosecute a crime, and that this will limit the effective scope of the law. Prosecutorial discretion.
Putting aside for the moment the (very serious) injustice that comes with the inequitable use of prosecutorial discretion, let's imagine a world without this discretion. Perhaps it's contrived, but one could imagine AI making it at least possible. Even by the book as it's currently written, is it a better world?
Suddenly, an AI monitoring public activity can trigger an AI investigator to draft a warrant, which an AI judge signs, approving it and drafting an opinion. One could argue that due process is had, and a record is available to the public showing that there was in fact probable cause for further investigation or even arrest.
Maybe a ticket just pops out of the wall like in Demolition Man, but listing in writing clearly articulated probable cause and well-presented evidence.
Investigating and prosecuting silly examples suddenly becomes possible. A CCTV camera catches someone finding a $20 bill on the street, and an audit finds that they didn't report it on their tax return. The myriad ways one can violate the CFAA. A passing mention of music piracy on a subway train can become an investigation and prosecution. Dilated pupils and a staggering gait could support a drug investigation. Heck, jaywalking tickets given out as though by speed camera. Who cares if the juice wasn't worth the squeeze when it's a cheap AI doing the squeezing?
Is this a better world, or have we all just subjected ourselves to a life hyper-analyzed by a motivated prosecutor?
Turning back in the general direction of reality, I'm aware that arguing "if we enforced all of our laws, it would be chaos" is more an indictment of our criminal justice system than it is of AI. I think that AI gives us a lens to imagine a world where we actually do that, however. And maybe thinking about it will help us build a better system.
by miki123211 on 12/5/23, 9:44 PM
It will soon be possible to create a dating app where chatting is free, but figuring out a place to meet or exchanging contact details requires you to pay up, in a way that 99% of people won't know how to bypass, especially if repeated bypassing attempts result in a ban. Same goes for apps like Airbnb or eBay, which will be able to prevent people from using them as listing sites and conducting their transactions off-platform to avoid fees.
The social media implications are even more worrying: it will be possible to check every post, comment, message, photo, or video and immediately delist it if it promotes certain views (like the lab leak theory), no matter how indirect these mentions are. Parental control software will have a field day with this, basically redefining helicopter parenting.
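A minimal sketch of how such a per-message check might work. Everything here is an illustrative assumption, not any platform's actual moderation stack: the model name, the prompt wording, and the yes/no decision rule are all invented.

    # Hypothetical per-message moderation pass (model, prompt, and policy
    # are assumptions for illustration only).
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    SYSTEM_PROMPT = (
        "You are a moderation filter for a dating app. Answer YES if the "
        "message tries to share or solicit contact details or arrange an "
        "off-platform meeting, however indirectly (spelled-out digits, "
        "hints, names of other apps). Otherwise answer NO."
    )

    def blocks_message(message: str) -> bool:
        resp = client.chat.completions.create(
            model="gpt-4o-mini",
            temperature=0,
            messages=[
                {"role": "system", "content": SYSTEM_PROMPT},
                {"role": "user", "content": message},
            ],
        )
        return resp.choices[0].message.content.strip().upper().startswith("YES")

    # A regex would miss "find me on the photo app, j dot smith";
    # a semantic filter like this would not -- which is why 99% of
    # bypass attempts get caught.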
by jillesvangurp on 12/5/23, 3:28 PM
Related to this is the notion of ubiquitous surveillance: basically anywhere you go, there is active surveillance, with AIs constantly filtering and digging through it. That's already the case in a lot of our public spaces in densely populated areas. But imagine it being everywhere and virtually inescapable (barring Faraday cages, tin foil hats, etc.).
The most feasible way to limit the downsides of that kind of surveillance is a combination of legislation regulating it and counter-surveillance ensuring that any would-be illegal surveillance has a high chance of being observed and thus punished. You do this by making the technology widely available but regulating its use. People would still try to get around it, but the price of getting caught abusing the tech would be jail. And with surveillance being inescapable, you'd never be certain nobody is watching you misbehave. The beauty of mass, multilateral surveillance is that you could never be sure no one is watching you abuse your privileges.
Of course, the reality of states adopting and monopolizing this is already producing 1984-like scenarios in, e.g., China, North Korea, and elsewhere.
by HackerThemAll on 12/5/23, 2:55 PM
by brunoTbear on 12/5/23, 3:49 PM
Am Google employee, not in hardware.
by dkjaudyeqooe on 12/5/23, 4:54 PM
Now, freedom to develop AI software doesn't mean freedom to use it however you please, and its use should be regulated, in particular to protect individuals from things like this. But of course people cannot be trusted, so you need to be able to deploy your own countermeasures.
by nojvek on 12/5/23, 7:26 PM
AI enables extracting all sorts of behavioral data, across a decades-long timespan, for everyone.
The devil's-advocate argument is that in a world where the data is not used for nefarious purposes, and only to prosecute crime as defined by governments, it leads to a society where no one is above the law and everyone receives equal treatment.
However, that seldom goes well, since the humans who control the system definitely want an edge for themselves.
by bmislav on 12/5/23, 4:30 PM
by zxt_tzx on 12/5/23, 3:41 PM
However, a more recent trend is companies that sell technologies to the state directly. For every reputable one like Palantir or Anduril or even NSO Group, there are probably many more funded in the shadows by In-Q-Tel, not to mention the Chinese companies doing the same in a parallel geopolitical orbit. Insofar as AI is a sustaining innovation that benefits incumbents, the state is surely the biggest incumbent of all.
Finally, an under-appreciated point: Apple's App Tracking Transparency policy, by restricting third-party tracking, naturally makes first-party data collection more valuable. So even if Meta or Google suffer in the short term, their positions are ultimately entrenched on a relative basis.
by miyuru on 12/5/23, 3:03 PM
Strange and scary how fast the world develops new technology.
by 127361 on 12/5/23, 3:22 PM
That is in addition to generating our own energy off-grid (so no smart meter data to monitor), thanks to the low cost of solar panels.
Bye bye Big Brother.
by troupo on 12/5/23, 3:06 PM
by renegat0x0 on 12/5/23, 3:39 PM
- they started spying on users' Gmail
- there was blowback, they reverted
- after some time they introduced "smart features", with ads again
Link https://www.askvg.com/gmail-showing-ads-inside-email-message...
I do not even want to check if "smart features" are opt-in, or opt-out.
by sarks_nz on 12/5/23, 5:24 PM
https://nickbostrom.com/papers/vulnerable.pdf
It's disturbing, but also hard (for me) to refute.
by sambull on 12/5/23, 2:48 PM
by yonaguska on 12/5/23, 3:08 PM
https://www.dhs.gov/sites/default/files/2023-09/23_0913_mgmt...
Fortunately the DHS has put together an expert team of non-partisan, honest Americans to spearhead the effort to protect our democracy. Thank you, James Clapper and John Brennan, for stepping up to the task.
https://www.dhs.gov/news/2023/09/19/secretary-mayorkas-annou...
And just in time for election season in the US, AI is going to be employed to fight disinformation -- for our protection, of course. https://www.thedefensepost.com/2023/08/31/ussocom-ai-disinfo...
by 1-6 on 12/5/23, 3:25 PM
AI can be a deployed 'agent' that does all the collection and finally sends scrubbed info back to its mothership.
by px43 on 12/5/23, 3:12 PM
Using common off-the-shelf, open-source, heavily audited tools, it's trivial today, even for a non-technical 10-year-old, to create a new identity and collaborate with anyone anywhere in the world. They can do research, get paid, make payments, and contribute to private communities in such a way that no existing surveillance infrastructure can positively link that identity to their government identity. Every day privacy tech is improving and adding new capabilities.
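To make the "new identity" claim concrete, here is a minimal sketch using the Python cryptography package: in the cryptographic sense a pseudonym is just a fresh keypair, and signatures let collaborators verify continuity with no link to a legal name. (The signed message is invented for illustration.)

    # A "new identity" is a fresh keypair; anyone can mint one and sign
    # statements under it without linking it to a government identity.
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.hazmat.primitives import serialization

    private_key = Ed25519PrivateKey.generate()
    public_key = private_key.public_key()

    # The public key is the pseudonym; publish it wherever you collaborate.
    pseudonym = public_key.public_bytes(
        encoding=serialization.Encoding.Raw,
        format=serialization.PublicFormat.Raw,
    ).hex()

    # Signatures prove that later messages come from the same pseudonym.
    message = b"payment received, shipping tomorrow"
    signature = private_key.sign(message)
    public_key.verify(signature, message)  # raises InvalidSignature on mismatch
    print(pseudonym)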
by intended on 12/5/23, 5:27 PM
I tried exactly this. I watched 4 talks from a seminar, got them transcribed, and used ChatGPT to summarize them.
It did 3 perfectly fine; for the 4th, it changed the speaker from a mild-mannered professor into a VC-investing superstar with enough successes under his belt not to care.
How do you verify your summary is correct? If your false-positive rate is 25-33%, that's a LOT of rework: one summary in every three or four.
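One crude way to triage which summaries need that rework is a grounding heuristic: flag summary sentences whose content words barely occur in the transcript, since fabrications like the "VC superstar" tend to introduce vocabulary the source never used. A rough sketch (the threshold and stopword list are arbitrary choices, and the example texts are made up):

    import re

    STOPWORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "is",
                 "was", "he", "she", "it", "that", "this", "with", "for"}

    def content_words(text):
        return {w for w in re.findall(r"[a-z']+", text.lower())
                if w not in STOPWORDS and len(w) > 2}

    def suspicious_sentences(summary, transcript, threshold=0.5):
        source = content_words(transcript)
        for sentence in re.split(r"(?<=[.!?])\s+", summary):
            words = content_words(sentence)
            if words:
                grounded = len(words & source) / len(words)
                if grounded < threshold:  # most words never appear in the source
                    yield grounded, sentence

    transcript = "The professor discussed error bounds in sparse regression."
    summary = ("The talk covered error bounds in sparse regression. "
               "The speaker, a famous venture capitalist, shared startup war stories.")
    for score, s in suspicious_sentences(summary, transcript):
        print(f"check manually ({score:.0%} grounded): {s}")

This won't catch subtle distortions, but it routes obvious inventions to a human instead of forcing a reread of every transcript.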
by TheLoafOfBread on 12/5/23, 4:01 PM
by erikerikson on 12/5/23, 8:11 PM
The question, I think, is how to navigate this and what consequences will follow. We could use these capabilities to enslave, but we could also use them to free and empower.
Scams rely on scale, and on social mechanisms that scale poorly, to turn a profit. Imagine if the first identification of a scam warned every potential mark it was subsequently tried on. Don't forget to concern yourself with false positives too, of course.
The injustice of being unable to act in disputes for lack of evidence would evaporate. Massive privacy, consent, and security risks follow, so will we be ready to properly protect and honor people and their freedoms?
At the end of this path may lie more efficient markets; increased capital flows and volumes; and a fairer, more just, more equitable world, more filled with joy, love, and happiness. There are worse options, of course.
by __jambo on 12/5/23, 7:23 PM
The flip side to this is that government had this power because these activities required enormous resources. Perhaps it will go the other direction: if there is less of a moat, other players can enter. E.g., all it takes to make a state is a bunch of cheap drones and the latest government bot configured to your philosophy.
Maybe it means government will massively shrink in personnel? Maybe we can have a completely open-source AI government/legal system. Lawyers kind of suck ethically anyway, so maybe it would be better? With a low barrier to entry, we could rapidly prototype such governments and trial them on smaller populations like Iceland's. Such utopias will be so good everyone will move there.
They still have to have physical prisons; if everyone is in prison this becomes silly, but I suppose they can fine everyone, which is not so different from lowering wages, which they already do.
by mmh0000 on 12/5/23, 4:40 PM
by mullingitover on 12/5/23, 8:16 PM
However, Americans expect that the law is enforced vigorously upon other people, especially people they hate. If AI enabled immediate immigration enforcement on undocumented migrants, large portions of the population would injure themselves running to the voting booth to have it added to the Constitution.
It's the whole expectation that for my group the law protects but does not bind, and for others it binds but does not protect.
by willmadden on 12/5/23, 6:22 PM
What is the market for this short term?
I think this could greatly curtail government corruption and serve as a stepping stone to AI government. It's also a cool and disruptive startup idea.
by sebastianconcpt on 12/6/23, 1:34 PM
by thesz on 12/5/23, 11:54 PM
What do humans do to circumvent that? They find ways around it; in the case of London Cockney, by using rhymes [1].
[1] https://www.theguardian.com/education/2014/jun/09/guide-to-c...
If you need to fool your AI of choice, rhyme the concepts!
For a demo, ask your AI of choice about "Does basin of gravy likes satin and silk?" (decode yourself)
The guide above is from 2014, yet my AI of choice could hardly cope when I asked questions in Cockney parlance.
You are welcome. ;)
by I_am_tiberius on 12/5/23, 8:07 PM
by darklycan51 on 12/5/23, 3:13 PM
Every service has access to the IPs you've used to log on, and most services require an email, a phone number, a debit/credit card, and/or similar personal info. Link that with government databases of addresses, real names, and ISP customers, and you can basically get at most people's accounts on virtually any service they use.
We also have things such as the Patriot Act in effect; the government could, if it wanted, run a system to do this automatically, where every message is scanned by an AI that catalogues it.
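A sketch of the kind of join being described, with entirely hypothetical table and column names; real systems would be messier, but the core operation is this simple:

    # Hypothetical linkage: a service's login records joined against ISP
    # subscriber records on IP address and time window.
    import pandas as pd

    logins = pd.DataFrame({           # what any service already holds
        "account": ["throwaway_99"],
        "ip": ["203.0.113.7"],
        "ts": pd.to_datetime(["2023-12-05 14:02"]),
    })
    isp_leases = pd.DataFrame({       # what an ISP (or a data request) holds
        "ip": ["203.0.113.7"],
        "subscriber": ["J. Doe, 12 Main St"],
        "lease_start": pd.to_datetime(["2023-12-05 09:00"]),
        "lease_end": pd.to_datetime(["2023-12-06 09:00"]),
    })

    linked = logins.merge(isp_leases, on="ip")
    linked = linked[(linked.ts >= linked.lease_start) & (linked.ts <= linked.lease_end)]
    print(linked[["account", "subscriber"]])  # pseudonymous account -> legal name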
I have believed for some time now that we are extremely close to a complete dystopia.
by FrustratedMonky on 12/5/23, 8:53 PM
Russia could do surveillance, but was limited by manpower.
Now AI solves this, there can be an AI bot dedicated to each individual.
Wasn't there another article on HN just the other day saying that car makers, phones, and health monitors can now all aggregate data to know 'your mood' when in an accident? To know where you are going, how you are feeling?
This is the real danger with AI. Even current technology is good enough for this kind of surveillance.
by Arson9416 on 12/5/23, 3:52 PM
People don't know what they're creating. Maybe it's time it bites them.
by elric on 12/5/23, 10:23 PM
Heck, even if they did care, there's nothing they can realistically do about it. The genie's out of the bottle.
by RandomLensman on 12/5/23, 2:59 PM
by EVa5I7bHFq9mnYK on 12/5/23, 5:59 PM
What's the best Docker image for that, one that's simple to configure?
by wseqyrku on 12/5/23, 8:35 PM
by righthand on 12/5/23, 4:38 PM
by pockmockchock on 12/5/23, 4:26 PM
by naveen99 on 12/5/23, 2:39 PM
by fsflover on 12/5/23, 5:07 PM
by blueyes on 12/5/23, 6:26 PM
by _Nat_ on 12/5/23, 3:26 PM
I mean, even if we pass laws to offer more protections, as computation gets cheaper it ought to become easier and easier for anyone to start a mass-spying operation -- even by just buying a bunch of cheap sensors and doing all of the work on their personal computer.
A decent near-term goal might be figuring out what sorts of information we can't reasonably expect privacy on (because someone's going to get it) and then ensuring that access to such data is generally available. Because if the privacy is going to be lost anyway, then we may as well try to address the next concern, i.e. disparities in data access dividing society.
by uticus on 12/5/23, 9:22 PM
- The money trail: "Their true customers—their advertisers—will demand it."
- The current state of affairs: "Surveillance has become the business model of the internet..."
- The fact that not participating, or opting-out, still yields informational value, if not even more so: "Find me all the pairs of phones that were moving toward each other, turned themselves off..."
This isn't a technological problem. Technology always precedes the morals and piggybacks on the fuzzy ideas that haven't yet developed into concrete, well-taught axioms. It is a problem about how our society approaches ideals. Ideals, not ideas. What do we value? What do we love?
If we love perceived security more than responsibility, we will give up freedoms. And gladly. If we love ourselves more than future generations, we will make short-sighted decisions and pat ourselves on the back for our efficiency in rewarding ourselves. If we love ourselves more than others, we won't even care much about social concerns. We'll fail to notice anything that doesn't much move the needle on our own comfort.
It's more understandable to me than ever how recent human horrors - genocides, repressive regimes, all of it - came to be. It's because I'm a very selfish person and I am surrounded by selfish people. Mass spying is a symptom - not much of a cause - of the human condition.
by moose44 on 12/5/23, 3:58 PM
by jacobwilliamroy on 12/5/23, 3:04 PM
by forward1 on 12/5/23, 6:08 PM
by pier25 on 12/5/23, 5:11 PM
by graphe on 12/5/23, 3:45 PM
Computers create and organize large amounts of information. This is useful for large organizations and disempowering for the average person. Any technology with these traits is harmful to individuals.
by ysofunny on 12/5/23, 4:48 PM
???
by barelyauser on 12/5/23, 10:31 PM
Concluding remarks: as man succeeded in creating high mechanical precision out of the chaotic natural environment, he will succeed in creating a superior artificial entity. This entity shall "spy on" (better described as "care for") every human being, maximizing our happiness.
by CrzyLngPwd on 12/5/23, 4:00 PM
by aaroninsf on 12/5/23, 11:38 PM
No. The US has always had political problems wrt surveillance; and many places have had much worse ones.
Now, all of us can anticipate a very different, all but inevitable, and very much worse political problem.
AI is a force multiplier and an accelerant.
As such it is a problem, an obvious one, and a very very big one. This is just one of many ways in which force multiplication and acceleration, so pursued, and so lauded, in so many domains, may work their magic on preexisting social and political evils. And give us brand new ones, per Larkin.
by CrzyLngPwd on 12/5/23, 3:45 PM
by indigo0086 on 12/5/23, 5:14 PM
AI will be a useful and world-changing innovation, which is why FUD rag articles like this will become more prevalent until its total adoption, even by the article's writer themselves.
by gumballindie on 12/5/23, 3:06 PM
by godelski on 12/5/23, 7:13 PM
What can AI/ML do __today__?
We have lots of ways to track people around a building or city; the challenge is to do these tasks through multi-camera systems. This includes things like person tracking (a random ID kept consistent across cameras), face identification (a more specific representation, independent of clothing, that usually pins down the former), gait tracking (how one walks), and device tracking (based on Bluetooth, WiFi, and cellular). There is a lot of mixed success with these tools, but I'll point out one part that should concern you: right now these are mostly ResNet50 models, the datasets are small, and they are not using advanced training techniques. That is changing. There are legal issues, and datasets are becoming proprietary, but the size and frequency of data gathering are growing.
I'm not going to talk about social media, because the metadata problem is already well discussed, you have all already made your decisions, and we've witnessed the results of those decisions. I'm also not going to talk about China, the most surveilled country in the world, the UK, or any of that, for similar reasons. We'll keep the discussion general, invariant to country.
What I will talk about is how modern ML has greatly accelerated the data-gathering sector. Your threat models have changed from governments rushing to gather all the data that they can, to big companies joining the game, to now small mom-and-pop shops doing so. I __really__ implore you all to look at what's in that dataset[0]. There are 5B items, and the tool retrieves them based on CLIP embeddings. You might think "oh yes, Google can already do this," but the difference is that you can't download Google. Google does not give you 16.5TB of CLIP-filtered image, text, & metadata. Or look into the RedPajama dataset[1], which has >30T tokens and 5TB of storage. With 32k tokens being about 50 pages, that's about 47 billion pages; that is, a stack of paper 5000km tall, more than ten times the altitude of the ISS and taller than the diameter of the moon. I know we all understand that there's big data collection, but do you honestly understand how big these numbers are? I wouldn't even claim to, because I cannot accurately conceptualize the size of the moon nor the distance to the ISS. They just roll into the "big" bin in my brain.
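For a concrete sense of what "retrieve based on CLIP embeddings" means, here is a minimal sketch using the open-source open_clip package; the image filename and query text are made up, and LAION-scale retrieval runs this same similarity against billions of precomputed embeddings rather than one image:

    # Minimal CLIP similarity sketch (filename and query are illustrative).
    import torch
    import open_clip
    from PIL import Image

    model, _, preprocess = open_clip.create_model_and_transforms(
        "ViT-B-32", pretrained="laion2b_s34b_b79k")
    tokenizer = open_clip.get_tokenizer("ViT-B-32")

    image = preprocess(Image.open("street_cam_frame.jpg")).unsqueeze(0)
    text = tokenizer(["a person in a red jacket carrying a backpack"])

    with torch.no_grad():
        img_emb = model.encode_image(image)
        txt_emb = model.encode_text(text)
        img_emb /= img_emb.norm(dim=-1, keepdim=True)
        txt_emb /= txt_emb.norm(dim=-1, keepdim=True)

    # Cosine similarity; at dataset scale this becomes a nearest-neighbor
    # lookup over billions of precomputed image embeddings.
    print((img_emb @ txt_emb.T).item())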
Today, these systems can track you with decent accuracy even if you use basic obfuscation techniques like glasses, hats, or even a surgical mask. Today we can track you not just by image but by how you walk, and can do so with moderate success even through walls (meaning there's no camera to spot if you want to know whether you're being tracked). Today, these systems can de-anonymize you through the unique text patterns that you use (see the Enron dataset, but at scale). Today, these machines can produce uncanny-valley replicas of your speech and text. Today we can make images of people that are convincingly real. Today, these tools aren't exclusive to governments or trillion-dollar corporations, but available to any person willing to spend a few thousand dollars on compute.
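On the "unique text patterns" point, the classic technique is stylometry: character n-gram frequencies turn out to be surprisingly author-specific. A toy sketch with scikit-learn; real attacks train on Enron-scale corpora, and the snippets and names here are invented:

    # Toy stylometry sketch: char n-gram profiles identify the author of
    # "anonymous" text. Tiny data, so just the shape of the technique.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    known = [
        ("alice", "tbh i reckon we ship it friday, no dramas"),
        ("alice", "reckon the fix is tiny tbh, ship it"),
        ("bob",   "Per my previous email, the deliverable remains blocked."),
        ("bob",   "Kindly revert at your earliest convenience."),
    ]
    authors, texts = zip(*known)

    clf = make_pipeline(
        TfidfVectorizer(analyzer="char", ngram_range=(2, 4)),  # char 2-4 grams
        LogisticRegression(max_iter=1000),
    )
    clf.fit(texts, authors)

    # An "anonymous" post is scored against each known writing profile.
    print(clf.predict(["kindly revert regarding the blocked deliverable"]))
    # -> likely ['bob']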
I don't want to paint this as a picture of doom and gloom. These tools are amazing and have the potential to do extraordinary good, at levels that would have been unimaginable only a few decades ago. Even many of the tools that can invade your privacy are beneficial in the right context. You cannot build a post-scarcity society while you require humans to monitor all the stores.
But like Uncle Ben says, with great power comes great responsibility. A technology that has the capacity to do tremendous good also has the power to do tremendous horrors.
The choice is ours, and the latter prevails when we are not open. We must keep pushing for these tools to be used for good, because with them we can truly do amazing things. We do not need AGI to create a post-scarcity world, and I have no doubt that, were this to become our primary goal, we could reach it within our lifetime without becoming a sci-fi dystopia, while also tackling existential issues such as climate. To poke the bear a little, I'd argue that if your country wants to show dominance and superiority on the global stage, it is not done through military power but through technology. You will win the culture war of all culture wars, and whoever creates the post-scarcity world will be a country never forgotten by time. Lift a billion people out of poverty? Try lifting 8 billion not just out of poverty but into the lower middle class, where no child dreams of being hungry. That is something humans will never forget. So maybe this should be our cold war, not the one in the Pacific. If you're so great, truly, truly show me how superior your country/technology/people are. This is a battle that can be won by anyone at this point; not just China or the US, but any European power also has a chance to win.
by boringg on 12/5/23, 6:50 PM
by blondie9x on 12/5/23, 5:19 PM
by Spivak on 12/5/23, 2:52 PM
But it was, and still is, a nothingburger, and this will be the same, because it doesn't enable anything except "better search." We've had comparable abilities for a decade now; yes, LLMs are better, but semantic search and NLP have been around a while and the world didn't end.
All the examples of what an LLM could do are just querying tracking databases. Uncovering organizational structure is just a social graph, correlating purchases is just querying purchase databases, listing license plates is just querying the camera systems. You don't need an LLM for any of this.
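For example, the license-plate case is literally one query against data that already exists; the schema and values below are invented for illustration:

    # Plain SQL over an ALPR-style table -- no LLM anywhere in the loop.
    import sqlite3

    db = sqlite3.connect(":memory:")
    db.execute("""CREATE TABLE plate_reads
                  (plate TEXT, camera TEXT, seen_at TEXT)""")
    db.executemany("INSERT INTO plate_reads VALUES (?, ?, ?)", [
        ("ABC123", "cam_courthouse", "2023-12-05 13:55"),
        ("XYZ789", "cam_courthouse", "2023-12-05 14:10"),
        ("ABC123", "cam_highway_9",  "2023-12-05 18:30"),
    ])

    # "Every plate seen near the courthouse that afternoon."
    rows = db.execute("""SELECT DISTINCT plate FROM plate_reads
                         WHERE camera = 'cam_courthouse'
                           AND seen_at BETWEEN '2023-12-05 13:00'
                                           AND '2023-12-05 15:00'""").fetchall()
    print(rows)  # [('ABC123',), ('XYZ789',)]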
by Nilrem404 on 12/5/23, 3:14 PM
Seriously nothing new or shocking about this piece. Spying is spying. Surveillance is surveillance. If you've watched the news at all in the past 2 decades, you know this is happening.
Anyone who assumes that any new technology isn't going to be used to target the masses by increasingly massive and powerful authoritarian regimes is woefully naive.
Another post stating what we all already know isn't helping or fostering any meaningful conversation. It will just be rehashes. Let me skip to the end here for you:
There is nothing we can do about it. Nothing will change for the better.
Go make a coffee or tea.