• 0 Posts
  • 230 Comments
Joined 2 years ago
Cake day: June 22nd, 2023

  • Thanks, I didn’t see this; there was a different embedded FAQ that didn’t have the specific Q&A below.

    But, if anything, it seems to confirm that the ad itself is legitimately clicked from the user’s IP address and merely hidden from the user, and that there is protection against code execution, but not that there is any privacy protection? It’s still very ambiguous.

    How does AdNauseam “click Ads”?

    AdNauseam ‘clicks’ Ads by issuing an HTTP request to the URL to which they lead. In current versions this is done via an XMLHttpRequest (or AJAX request) issued in a background process. This lightweight request signals a ‘click’ on the server responsible for the Ad, but does so without opening any additional windows or pages on your computer. Further, it allows AdNauseam to safely receive and discard the resulting response data, rather than executing it in the browser, thus preventing a range of potential security problems (ransomware, rogue Javascript or Flash code, XSS-attacks, etc.) caused by malfunctioning or malicious Ads. Although it is completely safe, AdNauseam’s clicking behaviour can be de-activated in the settings panel.
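
    For anyone curious what that mechanism looks like, here’s a minimal sketch of a background “click” as the FAQ describes it. This is not the actual AdNauseam source: the FAQ mentions XMLHttpRequest, while this uses the newer fetch API, and the function name and request options are my own assumptions.

    ```ts
    // Sketch of the behaviour the FAQ describes: "click" an ad by requesting
    // its target URL from a background script, then discard the response body
    // instead of rendering or executing it.
    async function simulateAdClick(adTargetUrl: string): Promise<void> {
      try {
        // The request goes out from the user's IP, so the ad server logs a
        // click, but no window or tab is ever opened.
        const response = await fetch(adTargetUrl, {
          method: "GET",
          credentials: "omit", // assumption: don't attach cookies to the fake click
          redirect: "follow",  // follow the usual click-tracking redirect chain
        });

        // Read and immediately throw away the body so nothing (HTML, JS, etc.)
        // is ever handed to a rendering or execution context.
        await response.arrayBuffer();
      } catch {
        // A failed "click" is harmless; nothing to retry.
      }
    }
    ```

    The key point from the FAQ is the last step: the response is received and discarded rather than executed in the browser, which is the claimed protection against rogue JavaScript and similar payloads.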

  • I have now!

    I think, for me, the challenge is finding something that breaks down trends and ideas without resorting to discourse that’s been overworked. Vocabulary that has already been politicized by society won’t change any minds, because exposure immunizes people against ideas, even good ones; the revolutionary idea becomes mundane given exposure plus time.

    That’s what I think is unique about Adam Curtis: he studiously avoids any framing that feels like a rote “capitalism” critique and instead speaks to something more fundamental to human nature.

  • I get that it’s usually just a dunk on AI, but it is also still a valid demonstration that AI has pretty severe and unpredictable gaps in functionality, in addition to failing to properly indicate confidence (or lack thereof).

    People who understand that it’s a glorified autocomplete will know how to disregard or prompt around some of these gaps, but this remains a litmus test because it succinctly shows you cannot trust an LLM response even in many “easy” cases.