from Hacker News

Canada plans to regulate search and social media use of AI

by markdog12 on 10/10/23, 2:18 AM with 82 comments

  • by Zambyte on 10/10/23, 4:20 AM

    > Failure to abide by the regulations can result in penalties as high as 3% of gross global revenues

    Even assuming all of Canada has internet access, Canadians make up less than 1% of the world's internet users. I wonder if companies will respond by simply blocking Canadians, since that would be cheaper and would pressure Canada to repeal this. Not that I want that at all, but it seems like a possibility.
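
    A rough back-of-envelope sketch of that trade-off, in Python (the revenue figure is hypothetical, and it assumes a company's Canadian revenue roughly tracks Canada's share of internet users):

        # Hypothetical figures, for illustration only
        global_revenue = 100e9   # assumed gross global revenue, in dollars
        canadian_share = 0.01    # assumption: Canada is ~1% of users and revenue
        penalty_rate = 0.03      # from the quoted bill text: up to 3% of gross global revenue

        revenue_lost_by_blocking = global_revenue * canadian_share  # ~$1B/year forgone
        maximum_penalty = global_revenue * penalty_rate             # ~$3B per violation

        # Under these assumptions, a single penalty costs more than the Canadian market is worth.
        print(revenue_lost_by_blocking < maximum_penalty)  # True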

  • by drdaeman on 10/10/23, 3:49 AM

    UPD: It seems the bill itself is not about AI, but there are some undisclosed amendments to the bill that aren't yet public and that are said to target AI specifically. My bad. On the bright side, my wish to see a legal definition of an "AI" may still be granted :-)

    --- Original comment below ---

    Hmm. Either I'm misunderstanding something (I've only quickly skimmed the bill and can't say I understood much of it), or the title is clickbait-ish, and this is:

    1) Not about any AI at all (and here I was hoping to see a legal definition of an "AI", haha). The definition of "automated decision system" seems to cover just about anything, from fancy ML stuff to your basic fortune(6) program. Basically, if a machine rather than a human (or something else that is not a machine) picked something, then it's an "automated decision system". At the very least, I have not found any sophistication requirements (but I could've missed them).

    2) Only seems to apply when personally identifying information is being used. If I got it right, then if you track someone and a machine makes a "decision about the individual that could have a significant impact on them", then it applies. Otherwise, not so much. Again, I could be wrong, missing something, or not really understanding what I've just read.

    I haven't really understood how this applies to moderation. I suspect it's about when a moderation system builds a user profile and feeds it to an automated decision system, but that'd be weird (because every single mailhost with an anti-spam system would be affected).

    Search engines, on the other hand? If I got it right, only those that do "personalized results" are affected.

    So, an LLM processing search hits for a query and deciding what's a good result and what's spam is seemingly not covered. Tracking cookies seem to be the core component of this thing, not AI.

  • by gustavus on 10/10/23, 3:40 AM

    > The regulation plans are revealed in a letter from ISED Minister François-Philippe Champagne to the Industry committee studying Bill C-27, the privacy reform and AI regulation bill. The government is refusing to disclose the actual text of planned amendments to the bill.

    So, between this and the online harms bill they're proposing, not to mention their massive screw-ups with their other tech bills, is Canada just trying to cut the Internet out of the lives of its citizens? Seriously, what is going on there? Anyone in Canada want to comment?

  • by ggm on 10/10/23, 4:08 AM

    The "refused to discuss draft state" thing is remarkably common. I am unsure why so many governments do this. I suspect it's because a large number of bad ideas get dropped before public draft, and nobody wants to wear the newspaper headlines which say "Government considering mandating second head for all voting citizens" when the 2 head idea is only going to get dropped before the document goes live.

    There's also the problem of multi-draft convergence. They don't want to "out" a proposal being worked out in concert with another agency or economy. In effect the document has to remain sealed until the most important party decides it's time to declare. So, across the life of the drafting there are huge reserved chunks. This happened with the TPP docs, which had sealed sections. They enraged local IT people in Australia because of the strong likelihood that the US would try to wedge the Australian 'right to repair' laws and their effect on DRM/IPR locks.

    I have very briefly seen 'behind the curtain' of this kind of thing (a ministerial communique for a meeting), and people take the secrecy seriously, because you won't be invited back to draw up the next one if you blab about this one. It doesn't have to reflect anything particularly special. I think it's likely the mystique is maintained for its own sake: vanity.

  • by filereaper on 10/10/23, 3:31 AM

    I strongly encourage watching Bill Gurley's talk from the All-In Summit about what happens whenever government gets involved in regulating technology.

    https://www.youtube.com/watch?v=F9cO3-MLHOM

  • by jrockway on 10/10/23, 4:17 AM

    That sound you hear is billions of dollars being flushed down the toilet as tech companies pay expert witnesses (and lots of attorneys) to convince judges and juries that they're using "algorithms" instead of "artificial intelligence". (Keep telling the shareholders it's AI, though.)

    This kind of reminds me of the whole DeCSS thing. Everyone ripped their DVDs. Several years too late, it was declared "illegal". Someone golfed it down to a t-shirt. People continued removing the digital restrictions from their media. Physical media is now dead as an industry, and literally nobody minds. Everyone is happier and the content creators make more money than ever before. AI feels like it's going down the same route. I have already seen the "generative AI in 40 lines of C", and those can easily be t-shirts. Whatever efforts are made to make AI illegal at this point are too little, too late. The cat is out of the bag. Whatever you think is going to happen because of AI will probably happen. So it's probably time to find some cash to solve those problems, instead of paying expert witnesses to define what "is" is.

    No government in human history has ever succeeded in making math illegal. Canada is unlikely to be the first. (It is even more futile than making alcohol illegal, which still makes me laugh when I think about it. You leave sugar water out in the air and it turns into alcohol. You can't solve that "problem" by signing a piece of paper that says the government forbids it. You are going to need more fungicide than humanity has the capability to produce. It is an unwinnable war. Fighting against AI is going to be like that.)

    I think the hype about the downfall of humanity is probably overrated (a bold claim, because if I'm wrong you won't be around to make fun of me). The biggest impact will probably be things like banks denying loans to people for the wrong reasons. If that's what we're worried about, make laws that make that illegal. It shouldn't matter what tools you use to get a result; whether you use AI or a bunch of employees sitting in a room with pencil and paper to run an illegal business, the outcome is the same: the business has illegal practices and the government should step in.

  • by emptybits on 10/10/23, 4:15 AM

    Canadian here. The government sticking its nose into AI bias in human rights areas like law enforcement powers and employment standards … OK. But regulating search algorithms … feels worlds apart in importance, and maybe too far.

  • by incomingpain on 10/10/23, 11:58 AM

    To be clear, Canada is regulating speech online, not just AI and search. Also by Michael Geist not long ago: https://www.michaelgeist.ca/2023/10/crtcregistrationruling/

    Basically, a service like Rumble, which is publicly traded and has its HQ in Toronto, must register with the Canadian government to be allowed to continue existing.

    All political podcasts have been ordered to register with the government.

    Lets not forget: https://www.international.gc.ca/global-affairs-affaires-mond...

    To work as a journalist, you have to be accredited by the government. Naturally, the government doesn't accredit its critics, and bans them from going to things like the debates.

  • by andrewstuart on 10/10/23, 3:31 AM

    No doubt this will prick up the ears of the luddite Australian government, which will become extremely enthused about doing the same thing.

    I just wish our governments would leave tech alone.

  • by snapplebobapple on 10/10/23, 5:34 AM

    I'm sorry guys, Eastern Canada has lost its mind (and has been this way for some time) and keeps electing idiots....

  • by olliej on 10/10/23, 8:59 AM

    Is this going to result in less use of "AI" or less use of "AI" as a marketing term for "glorified statistical model"?

    I'm not particularly convinced by claims of "intelligence", but that's not the point: the article uses "artificial intelligence system" as if that's the actual text, but what happens if you just say "this is a multidimensional statistical model"? There is no intelligence; it is simply an automatically generated model.

  • by narenkeshav on 10/10/23, 4:26 AM

    Canada is turning out to be a banana republic.

  • by GaggiX on 10/10/23, 3:43 AM

    Why? What's wrong with using AI to moderate your platform? Are you supposed to use underpaid workers from Colombia to do it and make them see horrible content?