  • I understand it fine, and it’s not just a packaging phenomenon; all sorts of software developers have stopped trying to reach consensus on a platform and instead ‘just ship the box’. 99% of the time a Python application will demand at least a virtualenv. Golang? You’re just going to build statically (at least LTO means less unrelated stuff comes along for the ride). Docker-style packaging is, of course, ‘bring the whole distro’. I’ll give snap and flatpak credit for at least letting packages declare external dependencies to mitigate it somewhat.
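
    For the Python case, the pattern is roughly this (hypothetical directory and requirements file; using the stdlib venv module, though the virtualenv package works similarly):

    ```python
    # Sketch: create an isolated per-application environment, the pattern most
    # Python apps now demand instead of relying on system packages.
    import subprocess
    import venv

    env_dir = "myapp-env"  # hypothetical environment directory
    venv.create(env_dir, with_pip=True)

    # Duplicate the app's pinned dependencies inside the environment, regardless
    # of what the rest of the system already has installed.
    # (The pip path assumes a Unix layout; requirements.txt is hypothetical.)
    subprocess.run([f"{env_dir}/bin/pip", "install", "-r", "requirements.txt"], check=True)
    ```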


  • I’d say it’s actually a bit of the opposite. Generally speaking we don’t need a new package manager or init system, and better hardware support is almost entirely a kernel concern (one might argue that the loose bits of key management, tpm2 tools, and authentication agents could be better integrated for “Windows Hello”-type functionality, I suppose, but I doubt that’s what the meme had in mind).

    There’s no real need to reinvent the wheel on those; we have a variety of wheels, sometimes serving different sensibilities, and sometimes any difference in capability went away long ago (rpm/dnf vs. deb/apt).

    The best motivation I can think of at this point is to make a specialty distribution that is ‘canned’ for a specific use case. Even then it’s probably best to be an existing distribution under the covers. I think Proxmox is a good example: it’s just Debian, with an installer made to just do Proxmox. You want automated installation? Just use Debian and then add Proxmox (the official recommendation), because they have no particular insight into automated deployment, so why not defer to an existing facility?

    The biggest conceptual change in packaging has been “waste as much disk space as you like duplicating dependencies to avoid conflicting dependencies”, maybe with “use namespace and cgroup isolation to better control app interactions”, and snap, flatpak, AppImage, and Nix already cover the gamut of that concept very well.

    For init, we have the easy-to-modify sysv init or the more capable but more inscrutable systemd. I don’t see a whole lot of opportunity left between those two sorts of options.


  • I get and appreciate the CLI, and for networking that’s pretty much all I use anyway, but I am shocked that enterprise networking doesn’t even bother to offer a GUI. Once upon a time Mellanox Onyx bothered to have a GUI, and I could see some people light up: finally, an enterprise switch that would let them do some things from a GUI. Then nVidia bought them and Cumulus, and ditched the GUI.

    There’s this kind of weird “turn in your geek card” culture around rejecting GUIs, but there’s a good chunk of the market that wants at least the option, even if they’re frankly a bit ashamed to admit it. You definitely have to move beyond the GUI if you want your tasks to scale, but not every engagement with the technology needs to scale.


  • While you don’t need to memorize button locations and menus, the frustration is that it takes longer, and memorizing those details only slightly mitigates that. It’s torture helping someone do something while they hunt for the UI element they need to reach the next level of the hierarchy. They will find it, in time, but it feels like an eternity.

    The main issue with GUI versus CLI is that a GUI narrows the options available at any given moment. That’s great for special-purpose usage. But if you have complex stuff to do, a CLI can provide more immediate access to a huge chunk of capabilities, a framework for connecting those capabilities together, and a starting point for making something repeatable, or for explaining a fix in a forum: just run command “X”, instead of a series of screenshots navigating to the bowels of a GUI to do some obscure thing.

    Of course, UI people have generally recognized the power and usefulness of text-based input for driving actions, and any vaguely powerful GUI has to have some “CLI-ness” to it.


  • I suppose the point is that the way people interact with GUIs now actually resembles how they interact with CLIs: they type from memory instead of hunting through a nested hierarchy to get where they’re going. There was a time when desktop UIs considered text input almost a sin against ease of use, an overcorrection in trying to be “better” than the CLI. So you were made to remember which category was designated to hold the application you were looking for, or else click through a search dialog that only matched filenames, and did so slowly.

    Now a lot of GUIs incorporate more textual elements. ‘Enter text to launch’ is one example, and a lot of advanced applications now have a “What do you want to do?” text prompt. The only UI for LLMs is essentially a CLI, really. One difference is that GUI text entry tends to be a bit “fuzzier”, whereas a traditional CLI is pretty specific and unforgiving.
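
    Something like Python’s difflib against a hypothetical command list captures the difference:

    ```python
    # Sketch: a GUI "type to launch" box can tolerate typos and partial input,
    # while a classic CLI only accepts the exact string.
    import difflib

    commands = ["open settings", "open terminal", "lock screen", "log out"]  # hypothetical

    def fuzzy_launch(query: str):
        matches = difflib.get_close_matches(query, commands, n=1, cutoff=0.5)
        return matches[0] if matches else None

    def strict_launch(query: str):
        return query if query in commands else None

    print(fuzzy_launch("opn settings"))   # -> "open settings"
    print(strict_launch("opn settings"))  # -> None
    ```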


  • In a pretty high-end tech company, there are still lots of people who see a terminal and think “ha hah, they’re still stuck in old mainframe stuff like you used to see in the movies”.

    My team determined long ago that we have to offer multiple user experiences to be taken seriously:

    A GUI, mostly to convince our own managers that it’s serious stuff, and to convince clients whose execs make the purchasing decisions without consulting the people who will actually use it.

    An API, mostly to appease people who say they want an API; it gets used occasionally.

    A CLI to wrap that API (roughly the shape sketched at the end of this comment), which is what 99% of the customers use 95% of the time (this target demographic is niche).

    Admittedly, there are a couple of GUI elements we created that are handy compared to what we can do from the CLI, from visualizations to a quicker UI for iterating on some domain-specific data. But most of the “get stuff done” work is just so much more straightforward from the CLI.
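
    The CLI side is roughly this shape (the endpoint, resource names, and environment variables are all hypothetical, not our actual product; standard library only):

    ```python
    # Sketch: a thin CLI wrapping a REST API, the layer most customers live in.
    import argparse
    import json
    import os
    import urllib.request

    API_BASE = os.environ.get("EXAMPLE_API_URL", "https://api.example.invalid/v1")

    def api_get(path: str):
        # Fetch a JSON resource from the (hypothetical) API.
        req = urllib.request.Request(
            f"{API_BASE}/{path}",
            headers={"Authorization": "Bearer " + os.environ.get("EXAMPLE_API_TOKEN", "")},
        )
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)

    def main():
        parser = argparse.ArgumentParser(prog="examplectl")
        sub = parser.add_subparsers(dest="cmd", required=True)
        sub.add_parser("list-jobs", help="list jobs via the API")
        show = sub.add_parser("show-job", help="show a single job")
        show.add_argument("job_id")

        args = parser.parse_args()
        if args.cmd == "list-jobs":
            print(json.dumps(api_get("jobs"), indent=2))
        else:
            print(json.dumps(api_get(f"jobs/{args.job_id}"), indent=2))

    if __name__ == "__main__":
        main()
    ```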


  • You could keep all of the ones that don’t have annual fees and spread out your purchasing. I have three cards: one that gives 2% back on everything, one that gives more back on food, and another that gives more back on online purchases. My aggregate credit limit is pretty high even if each individual limit is a bit modest (they aren’t as modest as they used to be, though).

    You can always pay off your balance more often than monthly. When I opened my first card, I paid it off every Friday to make sure the small limit stayed available if I needed it (I had a credit limit of $1,000 back then). Now I pay them off every payday, still multiple times a month. If you need to carry a large balance across payment cycles, you’ll get stuck on a high-interest treadmill you don’t want to be on anyway.

    Credit limits increase with time. The $1,000 card I started with now has a $10,000 limit. Mostly the increases came automatically, but I did request one so I could pay for a home repair in a single transaction. Now, between the three cards, I have a lot of available credit.

    A fair number of places where you might want to spend a lot of money in a single transaction won’t accept credit cards over a certain threshold anyway. The last time I bought a car, after settling the price I asked about just charging it to a credit card. They were only willing to put $2,000 on the card, so I had to cut a check for most of the car anyway.



  • For the scope of WebEx and Zoom, it’s… fine… mostly. I hate that I can’t really full-screen a remote screen share, so it could be better, but broadly speaking video, audio, and screen sharing are fine. Not coincidentally, that’s pretty much the only standalone functionality Teams bothered to implement itself; most everything else is built on top of SharePoint…

    It starts getting annoying as a chat platform. If you want to scroll back, it’s going to be painfully slow. If you participate in cross-company conversations, oh boy, you get to deal with the worst implementation of instancing I’ve ever seen for keeping your activity segregated. Broadly speaking, it just scales poorly for managing the sorts of conversations you have at a larger company. If your conversations are largely “forget it after a few hours”, you may be fine.

    Then you get into what these platforms have been doing for ages, with Lotus Notes and SharePoint before them: encouraging companies to build workflows on top of the platform. That’s where the real pain and suffering begins.


  • Hell, put any two people on a “knowledge” task and, even if both are capable, there’s going to be one person who pretty much does the work and another who largely just sits there. Unless the task has a clear delineation, but management almost never gives a two-person team a task that’s actually delineated well enough for both people to work on it competently.

    If the people earnestly try, they’ll just be slower as they step on each other, stall on coordination, and so on.


  • It really can’t. It can take your original prompt and fluff it out into obnoxiously long text. It can take your visual concept and sometimes render roughly the concept you describe (unless you hit an odd gap in the training data; there’s a video of image generation being incapable of producing a wine glass filled to the brim).

    A pattern I’ve seen: a quick joke that might have been funny as a passing comment, but the poster asks an LLM to make a “skit” of it and posts a long text that utterly wears out the concept. The LLM mixes text in a way consistent with the prompt, but it isn’t adding any creatively constructed content of its own; it can only drag in bits represented in the training data.

    Now, for image generation this can be fine. The picture can be nice enough, in the same way that meme text over a well-known picture is adequate. Your concept could only ever yield a picture, and a picture doesn’t waste the reader’s time the way a wall of text does. However, if you come at an LLM with specific artistic intent, it will frustrate you, because it won’t do precisely what you want, and at some point it’s easier to just do it yourself.


  • I assume there’s a large population of people who do nothing but write pretty boilerplate projects that have already been done a thousand times, maybe with some very milquetoast variations like branding or styling. Like a web form doing one-to-one manipulations of some database from user input (roughly the sort of thing sketched below).

    And/or a large number of people who think they need to be seen as “with it” and claim success because they see everyone else claim success. This is super common with any hype phase, where there’s a desperate need for people to claim affinity with the “hot thing”.
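
    For illustration, a hypothetical sketch of that kind of boilerplate, with form fields copied one-to-one into a table via the stdlib sqlite3 module:

    ```python
    # Sketch: the "business logic" is nothing but mapping submitted fields
    # straight onto database columns.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE customers (name TEXT, email TEXT, phone TEXT)")

    def save_form(form: dict) -> None:
        # Copy the submitted fields directly into the table, one to one.
        conn.execute(
            "INSERT INTO customers (name, email, phone) VALUES (?, ?, ?)",
            (form["name"], form["email"], form["phone"]),
        )
        conn.commit()

    save_form({"name": "Ada", "email": "ada@example.com", "phone": "555-0100"})
    print(conn.execute("SELECT * FROM customers").fetchall())
    ```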


  • And because a friend insisted that it writes code just fine.

    It’s so weird; I feel like I’m being gaslit from all over the place. People talk about “vibe coding” to generate thousands of lines of code without ever having to actually read any of it, and swear it can work fine.

    I’ve repeatedly given LLMs a shot, and the experience is always very similar. If I don’t know how to do something, neither does the LLM, but it will spit out code confidently, hallucinating function names or REST URLs as needed to fit whatever narrative would have been convenient. If I can’t spot the logic issue in some code that isn’t behaving correctly, it will also fail to generate useful text describing the problem.

    If the query is within reach of a copy/paste of the top Stack Overflow answer, then it can generate the code. The way LLMs integrate with IDEs makes that workflow easier than pulling in Stack Overflow answers, but you need to be vigilant, because it’s impossible to tell a viable result from junk; both are presented with equal confidence and certainty. It can also do a better job than traditional code analysis of spotting issues in things like string keys containing typos, and by extension errors in less structured languages like JavaScript and Python (where an “everything is a hash/dictionary” design prevails); there’s a tiny illustration of that class of bug at the end of this comment.

    So far I can’t say I’ve seen improvements. I can see how it could be perceived as valuable, but the babysitting the results require has cost more than the theoretical time savings. Maybe it helps for more boilerplate tasks, but those are generally already heavily wrapped by libraries, and when I do have to write a significant volume of code, it’s because there’s no library; and if there’s no library, the problem is niche enough that the LLM can’t generate it either.

    I think the most credible time save was a report of refreshing an old codebase that used a lot of deprecated functions, changing most of the calls to the new method without explicit human intervention. Better than tools like ‘2to3’ for Python, but still not magical either.
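
    To illustrate the string-key bug mentioned above, a tiny hypothetical example; nothing is syntactically wrong, so the interpreter and most linters stay quiet:

    ```python
    # Sketch: a typo inside a string key fails silently at runtime.
    config = {"timeout": 30, "retries": 3}

    # Intended "timeout"; .get() hides the typo and always falls back to 60.
    timeout = config.get("timout", 60)

    print(timeout)  # prints 60, not the configured 30
    ```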


  • It would have to be mandated by workplace regulations; no company is going to voluntarily educate its employees that more money has no downside.

    I’ll also say this doesn’t help much, because it strangely avoids the actual numbers. It should state explicitly that his total tax would be $1,600 + $4,266 + $2,827 = $8,693, not $13,200. It needs to include the scenario’s specific results, contrasted with what the viewer would have assumed otherwise.
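
    For example, something like this sketches the contrast (the brackets and income are illustrative values chosen to reproduce the three amounts quoted above, not figures taken from the video):

    ```python
    # Sketch: marginal brackets tax each slice of income at its own rate,
    # instead of the top rate applying to every dollar.
    BRACKETS = [  # (upper bound of the slice, rate) -- illustrative only
        (16_000, 0.10),
        (51_550, 0.12),
        (999_999_999, 0.22),
    ]

    def marginal_tax(income: float) -> float:
        tax, lower = 0.0, 0.0
        for upper, rate in BRACKETS:
            if income <= lower:
                break
            tax += (min(income, upper) - lower) * rate
            lower = upper
        return tax

    income = 64_400
    print(marginal_tax(income))  # 1,600 + 4,266 + 2,827 = 8,693.0
    print(income * 0.22)         # the naive "top rate on everything" figure
    ```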