• 0 Posts
  • 53 Comments
Joined 2 years ago
Cake day: July 4th, 2023


  • By the same logic, raytracing is ancient tech that should be abandoned.

    Nice straw man argument you have there.

    I’ll restate, since my point didn’t seem to come across. All of the “AI” garbage that is getting jammed into everything is merely scaled up from what has come before. Scaling up is not advancement. A possible analogy would be American automobiles from the late 60s through the 90s: just put in more cubic inches and a bigger chassis! More power from more displacement does not mean more advanced. Continuing that analogy, a 2.0L engine cranking out 400 lb-ft and 500HP while delivering a 28MPG average is advanced engineering. Right now, the software and hardware running LLMs are just MOAR cubic inches. We haven’t come up with more advanced data structures.

    These types of solutions can have a place and can produce something adjacent to the desired results. We make great use of expert systems constantly within narrow domains. Camera autofocus systems leap to mind. When “fuzzy logic” autofocus was introduced, it was a boon to photography. Another example of narrow-ish domain ML software is medical decision support software, which I developed in a previous job in the early 2000s. There was nothing advanced about most of it; the data structures used were developed in the 50s by a medical doctor from Columbia University (Larry Weed: https://en.wikipedia.org/wiki/Lawrence_Weed). The advanced part was the computer language he also developed for quantifying medical knowledge. Any computer with enough storage, RAM, and the hardware ability to quickly traverse the data structures can be made to appear advanced when fed with enough collated data, i.e. turning data into information.
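
    To give a rough idea of what that kind of narrow-domain decision support looks like under the hood, here is a toy sketch in C#. The findings and rules are entirely made up, and it is nowhere near Weed’s actual knowledge representation; it only shows the “traverse a data structure, turn data into information” part.

    ```csharp
    using System;
    using System.Collections.Generic;
    using System.Linq;

    // Hypothetical illustration only: a tiny rule base mapping observed findings to
    // items worth considering. Real systems encode clinician-authored knowledge that
    // is far richer; this just shows traversal of a simple structure.
    class DecisionSupportSketch
    {
        // A rule fires when all of its required findings are present.
        record Rule(string Suggestion, string[] RequiredFindings);

        static readonly Rule[] KnowledgeBase =
        {
            new("Consider iron studies", new[] { "fatigue", "pallor" }),
            new("Consider chest imaging", new[] { "cough", "fever" }),
        };

        static void Main()
        {
            var findings = new HashSet<string> { "fatigue", "pallor", "cough" };

            var suggestions = KnowledgeBase
                .Where(rule => rule.RequiredFindings.All(findings.Contains))
                .Select(rule => rule.Suggestion);

            foreach (var s in suggestions)
                Console.WriteLine(s);   // prints: Consider iron studies
        }
    }
    ```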

    Since I never had the chance to try it out myself, how was your neural network’s reasoning back in the day compared to LLMs? IMO that’s the most impressive part, not that it can write.

    It was slick for the time. It obviously wasn’t an LLM per se, but both were a form of LM. The OCR and auto-suggest for DOS were pretty shit-hot for an x386. The two together inspired one of my huge projects in engineering school: a whole-book scanner* that removed page curl and gutter shadow, and then generated a text-under-image PDF. By training the software on a large body of varied physical books and retentively combing over the OCR output and retraining, the results approached what one would see in the modern suite that now comes with your scanner. I only achieved my results because I had unfettered use of a quad Xeon beast in the college library where I worked. That software drove the early digitization processes for this (which I also built): http://digitallib.oit.edu/digital/collection/kwl/search

    *in contrast to most book scanning at the time, which required the book to be cut apart and the pages fed through a sheet-fed scanner; lots of books couldn’t be damaged like that.

    Edit: a word


  • No, no they’re not. These are just repackaged and scaled-up neural nets. Anyone remember those? The concept and good chunks of the math are over 200 years old. Hell, there was two-layer neural net software in the early 90s that ran on my x386. Specifically, Neural Network PC Tools by Russell Eberhart. The DIY implementation of OCR in that book is a great example of roll-your-own neural net. What we have today, much like most modern technology, is just lots MORE of the same. Back in the DOS days, there was even an ML application that would offer contextual suggestions for mistyped command line entries.
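
    For anyone who never got to play with that era of software: a “two-layer” net really is tiny. Here is a toy sketch of the same idea in C# (one hidden layer, plain backpropagation, trained on XOR). It is purely illustrative and is not the code from Eberhart’s book:

    ```csharp
    using System;

    // Toy "two-layer" neural net: one hidden layer, sigmoid activations, trained on
    // XOR with plain backpropagation. Illustrative only; not the book's implementation.
    class TinyNet
    {
        const int In = 2, Hid = 4;
        static readonly Random Rng = new Random(1);
        static readonly double[,] W1 = new double[In, Hid];
        static readonly double[] B1 = new double[Hid];
        static readonly double[] W2 = new double[Hid];
        static double B2;

        static double Sigmoid(double x) => 1.0 / (1.0 + Math.Exp(-x));

        // Forward pass: returns hidden activations and the network output.
        static (double[] Hidden, double Output) Forward(double[] x)
        {
            var h = new double[Hid];
            for (int j = 0; j < Hid; j++)
            {
                double s = B1[j];
                for (int i = 0; i < In; i++) s += x[i] * W1[i, j];
                h[j] = Sigmoid(s);
            }
            double o = B2;
            for (int j = 0; j < Hid; j++) o += h[j] * W2[j];
            return (h, Sigmoid(o));
        }

        static void Main()
        {
            // Small random initial weights.
            for (int i = 0; i < In; i++)
                for (int j = 0; j < Hid; j++)
                    W1[i, j] = Rng.NextDouble() - 0.5;
            for (int j = 0; j < Hid; j++) W2[j] = Rng.NextDouble() - 0.5;

            double[][] xs =
            {
                new double[] { 0, 0 }, new double[] { 0, 1 },
                new double[] { 1, 0 }, new double[] { 1, 1 },
            };
            double[] ys = { 0, 1, 1, 0 };
            const double lr = 0.5;

            for (int epoch = 0; epoch < 20000; epoch++)
            {
                for (int n = 0; n < xs.Length; n++)
                {
                    var (h, o) = Forward(xs[n]);

                    // Backpropagation for squared error with sigmoid activations.
                    double dOut = (o - ys[n]) * o * (1 - o);
                    for (int j = 0; j < Hid; j++)
                    {
                        double dHid = dOut * W2[j] * h[j] * (1 - h[j]);
                        W2[j] -= lr * dOut * h[j];
                        B1[j] -= lr * dHid;
                        for (int i = 0; i < In; i++) W1[i, j] -= lr * dHid * xs[n][i];
                    }
                    B2 -= lr * dOut;
                }
            }

            foreach (var x in xs)
                Console.WriteLine($"{x[0]} XOR {x[1]} -> {Forward(x).Output:F2}");
        }
    }
    ```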

    Typical of Silicon Valley, they are trying to rent out old garbage and use it to replace workers and creatives.


  • Sailors know your pain all too well. The key to preventing this is air movement. The less expensive option is some kind of material to put in between your cot and mattress, such as Hypervent Aire-Flow or Dri-Deck. An expensive solution is a Froli System, which has the added benefit of allowing you to tune the firmness for different parts of your body. I have a Froli under all of the bunks on my boat; condensation and mildew are no longer a thing. But the price is steep.




  • They were acquired by Opta Group in 2023. Since then, the quality has declined and prices have gone up. And around the time of the acquisition, they started doing some shady stuff with their claims of USB-IF compliance; the cables were blatantly not USB-IF compliant.

    Another example: I personally love my Anker GaN Prime power bricks and 737. Unfortunately, among my friends and peers, I am the exception. The Prime chargers are known for incorrectly reading cable eMarkers and then failing to deliver the correct power. This has been an issue for me twice so far, but I was able to work around it both times.


  • But is it similar to how a compiler uses high level syntax to generate low level assembly code?

    This is an apt comparison, actually.

    Is compiling a type of automatic code generation?

    This is also an apt comparison. Many modern languages are interpreted, or at least don’t compile straight to native code: C#*, Java, Ruby, Python, Perl… these all sit on top of runtimes or virtual machines such as .NET or the JVM. Compilation is the process of turning human-readable source into assembly. Interpretation turns human-readable source into instructions for a runtime; in the case of .NET, C# gets turned into MSIL, which tells the .NET runtime what to do, which in turn tells the hardware what to do.
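
    To make the runtime half of that concrete, here is a toy sketch in C#: a handful of MSIL-ish instructions and a tiny stack machine that executes them. The instruction names and the machine itself are made up for illustration; the real .NET runtime and its JIT are vastly more involved.

    ```csharp
    using System;
    using System.Collections.Generic;

    // Toy stack machine executing a few MSIL-ish instructions. Purely illustrative;
    // the real .NET runtime JIT-compiles MSIL to native code rather than walking it
    // like this, but the "instructions for the runtime" idea is the same.
    class TinyStackMachine
    {
        static int Execute((string Op, int Arg)[] program)
        {
            var stack = new Stack<int>();
            foreach (var (op, arg) in program)
            {
                switch (op)
                {
                    case "ldc": stack.Push(arg); break;                       // push a constant
                    case "add": stack.Push(stack.Pop() + stack.Pop()); break; // pop two, push sum
                    case "mul": stack.Push(stack.Pop() * stack.Pop()); break; // pop two, push product
                    case "ret": return stack.Pop();                           // return top of stack
                    default: throw new InvalidOperationException($"unknown op {op}");
                }
            }
            throw new InvalidOperationException("program ended without ret");
        }

        static void Main()
        {
            // Roughly what "return (2 + 3) * 4;" boils down to at this level.
            var program = new (string, int)[]
            {
                ("ldc", 2), ("ldc", 3), ("add", 0),
                ("ldc", 4), ("mul", 0), ("ret", 0),
            };
            Console.WriteLine(Execute(program));   // prints 20
        }
    }
    ```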

    Automatic code generation is more of “Hey computer, look at that code. Now translate that code to do different things, but use these templates I made.”

    FWIW, compilers was a two-semester course in engineering school, so I’m trying to keep this discussion accessible.

    *Before anyone rightly jumps on my shit about C#: yes, I know C# is technically a compiled language.


  • Is automatic code generation LLM-based?

    Not at all. In my case, automatic code generation is a process of automated parsing of existing Ruby on Rails API code plus some machine-readable comments/syntax I created in the RoR codebase. The way this API was built and versioned, no existing Gem could be used to generate docs. The code generation part is a set of C# “templates” and a parser I built (there’s a rough sketch of the idea below). The parser takes the Ruby API code plus my comments, and generates unit and integration tests for NUnit. This is probably the most common use case for automatic code generation. But… doesn’t building unit tests based on existing code potentially create a bad unit test? I’m glad you asked!

    The API endpoints are vetted and have their own RoR tests. We rebuilt this API in something more performant than Ruby before we moved it to the cloud. I also built generators that output ASP.NET API endpoint stubs with documentation. So the stubs just get filled out and the test suite is already built. Run Swashbuckle on the new code and out comes the OpenAPI spec, which is then used to build our documentation site and SDKs. The SDKs and docs site are updated in lockstep with any changes to the API.
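
    Here is the rough sketch mentioned above of what the template side of such a generator can look like in C#. Everything here is hypothetical and heavily simplified: the Endpoint record stands in for whatever metadata the parser pulled out of the Rails code and my comments, and ApiClient in the generated test is a stand-in for the test project’s HTTP helper. It just stamps parsed metadata into an NUnit test stub.

    ```csharp
    using System;

    // Hypothetical, heavily simplified sketch of template-based test generation.
    // Endpoint stands in for metadata an upstream parser extracted from the Rails
    // code and its machine-readable comments; ApiClient in the generated test is a
    // stand-in for whatever HTTP helper the real test project provides.
    record Endpoint(string Verb, string Route, int ExpectedStatus);

    static class TestGenerator
    {
        const string Template = @"
    [TestFixture]
    public class {0}Tests
    {{
        [Test]
        public async Task {0}_Returns{1}()
        {{
            var response = await ApiClient.SendAsync(""{2}"", ""{3}"");
            Assert.That((int)response.StatusCode, Is.EqualTo({1}));
        }}
    }}";

        static string Generate(Endpoint e)
        {
            // Turn a route like "/v2/orders" into an identifier-safe name like "V2Orders".
            var name = string.Concat(Array.ConvertAll(
                e.Route.Trim('/').Split('/'),
                part => char.ToUpper(part[0]) + part.Substring(1)));

            return string.Format(Template, name, e.ExpectedStatus, e.Verb, e.Route);
        }

        static void Main()
        {
            var endpoint = new Endpoint("GET", "/v2/orders", 200);
            Console.WriteLine(Generate(endpoint));
        }
    }
    ```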

    Edit: extra word and spaces


  • I tightly curated my feeds to stick to trusted sources on specific topics. The most “controversial” topic in my feed might be how to cook certain things certain ways, or maybe business analysis. The rest of my topics are known, trustworthy primary sources for things such as software, electrical, and mechanical engineering, plus culinary science and techniques.

    There’s also a bunch of “how to more efficiently do [thing that I already do] with [system I already use/own].” It’s pretty difficult to get suckered into misinformation on techniques for automatic code generation in C# or how to cook a carbonara sauce from the author whose books I already own.

    Something that really helps is never clicking on anything like “I should have bought this years ago” or any similar shit. I realize that I might be missing out on things that would actually make a certain task easier. But if it’s really life changing, I’m sure one of my trusted sources, online or otherwise, will get around to suggesting it to me.

    Staying away from talking heads, even ones I like, goes a very long way to preventing blatant bullshit ever getting suggested. I click quite often on “don’t suggest again.” It’s a chunk of effort up front, but then it’s a small amount of maintenance from there.


  • It absolutely happens. Most of my long-term partners started with that “sparks at first sight” energy. In high school, my first girlfriend and I saw each other from across the bus waiting zone, and it was on. Even our parents were blown away by our chemistry. Unfortunately, she died of acute lymphocytic leukemia two years later. My first wife and I spotted each other from across a nightclub dancefloor. I thought she gave me a fake phone number, but it turned out to be real. I was on a bike tour, stopped at a winery, and met an amazing woman who became my second wife 18 months later.

    But here’s the problem with that instant connection: it’s almost always a very bad sign. Those instant sparks are indicative of non-verbal cues that both people fit a mutually faulty template. For people who have unaddressed trauma, that template is just waiting to be matched, and it produces disastrous results in the majority of relationships. John Gottman at the University of Washington has studied intimate interpersonal dynamics in depth; he and his lab have literally written the book(s) on how to have healthy, fulfilling relationships. Spoiler alert: instant attraction should be a red flag for about 99% of the population.

    But yeah, get professional help.




  • Oh, I skip FB and IG ads completely. It’s crazy: I didn’t even have to install anything, and the ads just disappeared one day.

    But seriously, the “your attention is being monetized” model makes for such an awful experience for me. I’m envious of people who can enjoy the world and the Internet when ads are everywhere.