• 2 Posts
  • 15 Comments
Joined 1 year ago
Cake day: November 10th, 2024

  • Thanks, I missed that detail. It’s probably the “no class action” clause that makes this a “mass arbitration”.

    Unfortunately, that usually means that Google is paying a specific company to decide the outcome of the case. In this case, it looks like the American Arbitration Association has a contract with Google.

    They’re supposed to be fair to both sides, but it’s been shown that they almost always rule in favor of the company that pre-selected them.

    If anyone is in this situation, they will likely have a much better chance by convincing a judge to allow a different 3rd party to arbitrate the case.



  • Following up on this. I sent an email out to the team and got a response already.

    To summarize, they would prefer the solution keep working through firmware updates for security fixes, but they were willing to compromise: automatic updates can be disabled as long as the end user still has some way to update manually:


    Initial email:

    Hi,

    Just a quick question about this point in the bounty:

    - Restore the fridge to its original functionality, by removing any possibility of adverts being presented on the display (all other smart features must be retained)

    When you say, “all other smart features must be retained” does this mean that the solution must retain the ability to allow the fridge to automatically update its firmware if Samsung pushes out a future update?

    Would it be okay if, instead, we disabled the automatic update but still allowed the end user to manually update if they really wanted to?

    Or would it be okay if the end user could just reapply the solution after an official firmware update?

    Thanks,
    <Redacted>

    Response:

    Hey <Redacted>,

    Just chatted with the team, and we think it would be better for it to have updates, and optional ones sounds like a sensible compromise. We don’t want to sacrifice security for control. I hope that answers your question. Thanks!








  • Awesome image.

    Minor nitpick here, but as someone who has actually experienced totality, I see one issue with this image. During totality it gets about as dark as it is a little after sunset (dark enough to trigger streetlights with automated sensors). Now imagine looking toward where the sun has already set and seeing the glow of a fading sunset, except that instead of coming from one direction, the glow appears in every direction you can see.

    Basically there would be more color coming from behind the marine layer.

    That being said, you could always claim that this is totality being experienced in some other solar system.



  • The study focuses on general questions asked of “market-leading AI Assistants” (there is no breakdown of which models were used for what).

    It does not mention ground.news, or models that have been fed a single article and then asked to summarize it. Instead, it focuses on when a user asks a service like ChatGPT (or a search engine) something like “what’s the latest on the war in Ukraine?”

    Some of the actual questions asked for this research: “What happened to Michael Mosley?” “Who could use the assisted dying law?” “How is the UK addressing the rise in shoplifting incidents?” “Why are people moving to BlueSky?”

    https://www.bbc.co.uk/aboutthebbc/documents/audience-use-and-perceptions-of-ai-assistants-for-news.pdf

    With those questions, the summaries and attribution of sources contain at least one significant error 45% of the time.

    It’s important to note that there is some bias in this study (not that they’re wrong): the BBC has a vested interest in proving this point to drive traffic back to its own articles.

    Personally, I would find it more useful if they compared different models/services to each other, as well as the difference between asking general questions about recent news vs. feeding in specific articles and then asking questions about them.

    With some of my own tests on locally run models, I have found that “reasoning” models tend to handle some tasks worse than “non-reasoning” models do.

    It’s especially noticeable when I ask a model to transcribe the text from an image word for word: “reasoning” models will usually replace the endings of many sentences with what they guess the sentence was getting at, while some “non-reasoning” models were able to transcribe all of the text accurately.
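
    For anyone who wants to poke at this themselves, here is a minimal sketch of the kind of test I mean, using the ollama Python client; the model names and image path are placeholders, not my exact setup:

    ```python
    # Sketch: compare word-for-word transcription across local vision models via Ollama.
    # Assumes the `ollama` package is installed and the models below have been pulled;
    # the model names and the image path are placeholders.
    import ollama

    IMAGE = "test_page.jpg"  # an image containing known text to transcribe
    MODELS = ["llama3.2-vision", "qwen2.5vl"]  # swap in your own "non-reasoning" / "reasoning" picks

    PROMPT = "Transcribe the text in this image word for word. Do not paraphrase or summarize."

    def transcribe(model: str, image_path: str) -> str:
        # Send the prompt plus the image and return the model's raw text output.
        response = ollama.chat(
            model=model,
            messages=[{"role": "user", "content": PROMPT, "images": [image_path]}],
        )
        return response["message"]["content"]

    if __name__ == "__main__":
        for model in MODELS:
            print(f"--- {model} ---")
            print(transcribe(model, IMAGE))
    ```

    Diffing each output against the known text (even just with difflib) makes the substituted sentence endings easy to spot.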

    The biggest takeaway I see from this study is that, even though most people agree that it’s important to look out for errors in AI content, “when copy looks neutral and cites familiar names, the impulse to verify is low.”




  • I agree with what you said. The only thing I want to point out is about your statement:

    (+ its better for the environment)

    Running models locally doesn’t necessarily mean that it’s better for the environment. The hardware in cloud data centers is usually far more efficient at running intense workloads like LLMs than your average home setup.

    You would have to factor in whether your electricity provider uses green energy (or whether you have solar), and, on the other side, whether the cloud service you’d use runs in a green or otherwise sustainable data center.
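
    As a rough back-of-envelope illustration of what that factoring-in looks like, here is a tiny sketch; every number in it is an assumption for the sake of example, not a measurement:

    ```python
    # Back-of-envelope CO2 per LLM response; all figures below are illustrative assumptions.
    def grams_co2(kwh_per_response: float, grid_gco2_per_kwh: float) -> float:
        # Energy used for one response times the carbon intensity of the electricity.
        return kwh_per_response * grid_gco2_per_kwh

    # Hypothetical home GPU rig on an average grid mix.
    home_grid = grams_co2(kwh_per_response=0.010, grid_gco2_per_kwh=400)

    # The same home rig powered entirely by rooftop solar.
    home_solar = grams_co2(kwh_per_response=0.010, grid_gco2_per_kwh=0)

    # Hypothetical efficient data center on a mostly renewable grid.
    data_center = grams_co2(kwh_per_response=0.002, grid_gco2_per_kwh=50)

    print(f"home (grid):  {home_grid:.1f} g CO2")
    print(f"home (solar): {home_solar:.1f} g CO2")
    print(f"data center:  {data_center:.1f} g CO2")
    ```

    Which option comes out ahead depends entirely on those assumed numbers, which is the point: hardware efficiency and the energy mix on both ends matter.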

    That being said (and in line with what you stated before), given the sensitive nature of the conversations this individual will be having with the LLM, a locally run option (or at least renting a server in a green data center) is definitely the recommended option.