• 0 Posts
  • 21 Comments
Joined 1 year ago
Cake day: March 3rd, 2024


  • People are making fun of the waffling and the apparent indecision and are missing the point. Trump isn’t flailing and trying to figure out how to actually make things work. He’s doing exactly what he intended: he’s holding the US economy for ransom and building a power base among the billionaires.

    He used the poor and ignorant to get control of the public institutions, and now he’s using that power to get control over the private institutions (for-profit companies). He’s building a carbon copy of Russia with himself in the role of Putin. He’s almost there, and it’s taken him 2 months to do it.


  • The author hits on exactly what’s happening with the comparison to carcinisation: crustacean evolution converges to a crab-like form because that form is the optimum for the environmental stresses.

    As tiramichu said in their comment, digital platforms are converging to the same form because they’re optimizing for the same metric. And they’re all optimizing for that metric because their monetization model is advertising.

    In the golden days of digital platforms, i.e. the 2010s, everything was venture capital funded. A quality product was the first goal, and monetization would come “eventually.” All of the platforms operated this way. Advertising was discussed as one potential monetization strategy, but others were on the table, too, like the “freemium” model that seemed to work well for Google: provide a basic tier for free that was great in its own right, and then have premium features that power users had to pay for. No one had detailed data on what worked and what didn’t, or on how well each model performed in a given market, because everything was so new. There were a few one-off success stories and many wild failures from the dotcom crash, but no clear path to reliable, successful revenue streams.

    Lots of products do still operate with the freemium model, but more and more platforms have moved, and are still moving, to advertising, ultimately because the venture capital firms that initially funded them have strong control over them and have a stronger long-term interest in money than in a good product. The data is now out there that the advertising model makes so, so much more money than a freemium model ever could in basically any market. So VCs want advertising, so everything is TikTok.


  • The open availability of cutting-edge models creates a multiplier effect, enabling startups, researchers, and developers to build upon sophisticated AI technology without massive capital expenditure. This has accelerated China’s AI capabilities at a pace that has shocked Western observers.

    Didn’t a Google engineer put out a white paper about this around the time Facebook’s original LLM weights leaked? They compared the rate of development of corporate AI groups to that of the open source community and found there was no possible way the corporate model could keep up if there were even a small investment in the open development model. The open source community was solving, in weeks, open problems the big companies couldn’t solve in years. I guess China was paying attention.


  • Ah, I think I misread your statement of “followers by nature” as “followers of nature.” I’m not really willing to ascribe personality traits like “follower” or “leader” or “independent” or “critical thinker” to humanity as a whole based on the discussion I’ve laid out here. Again, the possibility space of cognition is bounded, but unimaginably large. What we can think may be limited to a reflection of nature, but the possible permutations that can be made of that reflection are more than we could explore in the lifetime of the universe. I wouldn’t really use this as justification for or against any particular moral framework.


  • I think that’s overly reductionist, but ultimately yes. The human brain is amazingly complex, and evolution isn’t directed but keeps going with whatever works well enough, so there’s going to be incredible breadth in human experience and cognition across everyone in the world and throughout history. You’ll never get two people thinking exactly the same way because of the sheer size of that possibility space, despite the more than 100 billion people who have lived throughout history and today.

    That being said, “what works” does set constraints on what is possible with the brain, and evolution went with the brain because it solves a bunch of practical problems that enhanced the survivability of the creatures that possessed it. So there are bounds to cognition, and there are common patterns and structures that shape cognition because of the aforementioned problems they solved.

    Thoughts that initially reflect reality but that can be expanded in unrealistic ways to explore the space of possibilities an individual can effect in the world around them have clear survival benefits. Thoughts that spring from nothing and relate in no way to anything real strike me as not useful at best and, at worst, disruptive to what the brain is otherwise doing. Thinking about that perspective more, given the powerful levels of pattern recognition in the brain, I wonder if the creation of “100% original thoughts” would result in something like schizophrenia, where the brain’s pattern recognition systems reinterpret (and misinterpret) internal signals as sensory signals of external stimuli.


  • The problem with that reasoning is it’s assuming a clear boundary to what a “thought” is. Just like there wasn’t a “first” human (because genetics are constantly changing), there wasn’t a “first” thought.

    Ancient animals had nervous systems that could not come close to producing anything we would consider a thought, and through gradual, incremental changes we get to humanity, which is capable of thought. Where do you draw the line? Any specific moment in that evolution would be arbitrary, so we have to accept a continuum of neurological phenomena that span from “not thoughts” to “thoughts.” And again we get back to thoughts being reflections of a shared environment, so they build on a shared context, and none are original.

    If you do want to draw an arbitrary line at what a thought is, then that first thought was an evolution of non-/proto-thought neurological phenomena, and itself wasn’t 100% “original” under the definition you’re using here.


  • From your responses to others’ comments, you’re looking for a “thought” that has absolutely zero relationship with any existing concepts or ideas. If there is overlap with anything that anyone has ever written about or expressed in any way before, then it’s not “100% original,” and so either it’s impossible or it’s useless.

    I would argue it’s impossible because the very way human cognition is structured is based on prediction, pattern recognition, and error correction. The various layers of processing in the brain are built around modeling the world around us in order to generate predictions; higher layers then compare those predictions with the actual sensory input to identify mismatches, and the layers above that reconcile the mismatches and adjust the prediction layers. That’s a long-winded way to say our thoughts are inspired by the world around us, and so are a reflection of the world around us. We all share our part of this world with at least one other person, so we’re all going to share commonalities in our thoughts with others.

    But for the sake of argument, assume that’s all wrong, and someone out there does have a truly original thought, one with zero overlap with anything that has come before. How could they possibly express that thought to someone else? Communication between people relies on some kind of shared context, but any shared context for this thought means it depends on another idea, or “prior art,” so it couldn’t be 100% original. If you can’t share the thought with anyone, nor express it in any way to record it (because that again is communication), it dies with you. And you can’t even prove you’ve had it without communicating, so how would someone with such an original thought convince you they’ve had it?


  • Math, physics, and to a lesser extent, software engineering.

    I got degrees in math and physics in college. I love talking about counterintuitive concepts in math and things that are just way outside everyday life, like transfinite numbers and very large dimensional vector spaces.

    My favorite parts of physics to talk about are general relativity and the weirder parts of quantum mechanics.

    My day job is software engineering, so I can also help people get started learning to program, and then take the next step of building a solid, maintainable software project. It’s more “productive” in the traditional sense, so it’s satisfying to help people be more productive, but when it’s just free time to shoot the shit, talking about math and science is way more fun.


  • I’m sorry, I mostly agree with the sentiment of the article in a feel-good kind of way, but it reads like the claim that bullies will get their comeuppance later in life, when you actually look them up later and they have high-paying jobs and wonderful families. There’s no substance here, just a rant.

    The author hints at analogous cases in the past of companies firing all of their engineers and then having to scramble to hire them back, but doesn’t actually get into any specifics. Be specific! Talk through those details. Prove to me the historical cases are sufficiently similar to what we’re starting to see now to justify the claims in the rest of the article.



  • I’m not the person you’re replying to, but I think their point is that the bars don’t scale linearly. The red bar (2014 price) for the McChicken is supposed to represent $1 and the yellow bar (2024 price) ~$3, but the yellow bar is not 3 times the length of the red bar. This means the relative differences between the bar lengths don’t match the percent increase numbers printed above them. This is most egregious when comparing the relative differences between the McChicken and the Quarter Pounder with Cheese meal: why does a 122% increase look so much worse than the 199% increase?

    I suspect the cause of the problem is that the small bars were stretched a bit to fit printing the dollar value within them, but if that throws off the visual accuracy of the bars, what’s the point of using bars at all?
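    To make the proportionality point concrete, here’s a minimal sketch in Python. The McChicken’s ~$1 → ~$3 comes from the chart; the Quarter Pounder meal prices are made up just to produce a ~122% increase. It draws bars scaled strictly to the dollar values, so the visual ratio actually matches the printed percent increase:

    ```python
    # Illustrative only: bar lengths scaled in proportion to the dollar values,
    # so a 199% increase visibly outgrows a 122% increase.
    items = {
        "McChicken": (1.00, 2.99),                       # ~$1 -> ~$3 per the infographic
        "Quarter Pounder w/ Cheese meal": (5.40, 11.99), # hypothetical prices, ~122% increase
    }

    max_price = max(p for pair in items.values() for p in pair)
    scale = 40 / max_price  # the longest bar gets 40 characters

    for name, (old, new) in items.items():
        increase = (new - old) / old * 100
        print(f"{name} (+{increase:.0f}%)")
        print(f"  2014 {'█' * round(old * scale)}  ${old:.2f}")
        print(f"  2024 {'█' * round(new * scale)}  ${new:.2f}")
    ```

    With proportional scaling, the 2024 McChicken bar is about 3x its 2014 bar, and the Quarter Pounder meal bar is about 2.2x its 2014 bar, which is exactly the story the percentages tell.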


  • Robin Williams as the Bicentennial Man. The movie was okay; his performance was amazing. I’ve struggled with mortality for a while, like I expect a lot of people do, and seeing him play a character who starts his existence immortal and chooses mortality really got to me. His death in the movie hit me much, much harder than I expected. I haven’t watched the movie again since my first viewing because I’m honestly afraid of going through that again.




  • Not OP, but in my circles the simplest, strongest point I’ve found is that no cryptocurrency has a built-in mechanism for handling mistakes. People are using these systems, and people make mistakes. Without built-in accommodations, you’re either

    1. Creating real risk for anyone using the system, because each mistake is irrecoverable financial loss, and that’s pretty much the definition of financial risk, or
    2. Encouraging users to subvert the system in its core functionality in order to accommodate mistakes, which undermines the entire system and again creates risk, because you don’t really know how anything is going to work with these ad hoc side systems.

    Either way, crypto is just more costly to use than traditional systems once you properly factor in those risks. So the only people left using it are those who expect greater rewards to offset all that additional risk, which means speculators and grifters.