Probably should’ve been “installing and using nvidia drivers”.
Certainly, and I suspect I have AuDHD (the ADHD part is diagnosed).
This combination is really difficult to see/diagnose, as these conditions somewhat cancel each other out. It took a very hyperfocused deep-dive into all kinds of papers for me to slowly come to the conclusion that ADHD alone doesn’t explain my behavior. AFAIK this is, in some regards, an active research area (how correlated are these conditions, are they even the same underlying condition?).
(I think) few psychiatrists really have deep insight into this (and thus can accurately diagnose the combination).
I would say foremost: strong opinions and idealism (both strongly correlated with ASD and ADHD), e.g. about the fucked-up state of centralized social media controlled by right-wing billionaires.
Whenever I talk to people I don’t suspect of being neurodivergent, they just don’t really care about it; convenience is the driving factor.
I mean, for me both work (Ritalin is just a lot shorter-acting, with more ups and downs, and generally less effective).
Though I’m indeed prescribed Elvanse; it’s basically the hyperfocus drug IME (YMMV).
I’m really productive with it (I’m a passionate programmer, which probably helps), but sometimes a little bit too productive: burning through complex problems for > 10 hours a day, sometimes completely ignoring other stuff I should be doing as well, and being somewhat exhausted after somehow escaping that hyperfocus, or finishing the issue. As I get “smarter” through it and learn a lot, I’ll just accept this as a net-positive effect I have to deal with.
But I have more control over what I’m hyperfocusing on (as I’m less easily bored and distracted), and I try to direct it toward issues that actually deserve that hyperfocus.
I mean, I think the share of neurodivergent people on lemmy is likely very high (for various reasons). And since these conditions are strongly heritable, that likely extends to the grandparents too.
Have you tried meds (stims specifically)? That significantly increased my productive hyperfocus phases.
Keep in mind there’s a strong correlation between ASD and ADHD. So that could just be the ADHD side of things.
But it’s still far more political here…
If it ain’t broken
But it is…
I still have (or rather had) some screen-tearing somewhere. I’ve mostly annihilated that issue with settings in X11 (though some application somewhere still has issues, e.g. the video player). And it just feels clunky nonetheless.
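For anyone hitting the same thing: one common X11 tear-fix on the proprietary nvidia driver (not necessarily the exact settings I used) is forcing the full composition pipeline. “DP-0” and the mode string below are placeholders; check `xrandr` for yours:

```shell
# List outputs and modes first; adjust the name/mode below accordingly
xrandr

# Force the full composition pipeline (proprietary nvidia driver only);
# "DP-0: 1920x1080_60" is a placeholder for your actual output and mode
nvidia-settings --assign CurrentMetaMode="DP-0: 1920x1080_60 +0+0 { ForceFullCompositionPipeline = On }"
```

It costs a tiny bit of latency, but it killed tearing for a lot of people.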
Although I’m currently not using Hyprland, it really feels nice to use, really flowy. I’m currently testing COSMIC (which, reasonably, is still in alpha; I have issues with *** nvidia, like suspend sometimes hanging the computer).
That said, I think it’s still OK to wait until the whole ecosystem is well supported on wayland, and *** nvidia finally gets their wayland shit together.
Even if you don’t take profit off the list, it could well be that he had more financial success with these other companies because he uses twitter as a promotion and propaganda platform…
I’ve got news for you: basically every app I’ve used for lemmy so far has infinite scroll. Currently Thunder, previously Sync.
As you’re being unkind all the time, let me be unkind as well :)
A calculator also isn’t much help, if the person operating it fucks up. Maybe the problem in your scenario isn’t the AI.
If you can effectively use AI for your problems, maybe they’re too repetitive, and actually just dumb boilerplate.
I’d rather solve problems that require actual intelligence (e.g. doing research, solving math problems, thinking about software architecture, solving problems efficiently), and don’t even want to deal with problems that require writing a lot of repetitive code, which AI may be of help with (and often isn’t).
I have yet to see generated Rust code that is efficient and autovectorizes well, without a lot of allocs etc. I always get triggered by the insanely bad code quality of AI that just doesn’t really understand what allocations are… Arghh, I could go on…
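A toy illustration of what I mean (my own sketch, not actual generated output): both functions compute the same thing, but only the first gives the optimizer a clean, allocation-free loop it can autovectorize. Generated code tends to look like the second, collecting into an intermediate `Vec` for no reason:

```rust
// Alloc-free: a plain iterator chain the compiler can lower to SIMD.
fn sum_of_squares(v: &[f32]) -> f32 {
    v.iter().map(|x| x * x).sum()
}

// Typical "generated" style: a heap allocation on every call, for nothing.
fn sum_of_squares_allocy(v: &[f32]) -> f32 {
    let squares: Vec<f32> = v.iter().map(|x| x * x).collect();
    squares.into_iter().sum()
}

fn main() {
    let v = [1.0_f32, 2.0, 3.0];
    // Same result, very different codegen under optimization.
    assert_eq!(sum_of_squares(&v), 14.0);
    assert_eq!(sum_of_squares_allocy(&v), 14.0);
    println!("ok");
}
```

Trivial example, sure, but scale that pattern across a hot loop and you’ve lost both the vectorization and your cache.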
Yes, I know; I’ve tried all kinds of inputs and ways to query it, including full code-bases etc. Long story short: I’m faster just not caring about AI (at the moment). As I said somewhere else here, I have a theoretical background in this area. Speaking of which, I really should try training or fine-tuning a DeepSeek model on our code-bases, to see whether it can be a good alternative to something like the dumb GitHub Copilot (which I’ve also disabled, because it produces a looot of garbage that I don’t want to waste my attention on…). Maybe it’s now finally usable at least for completion, when it knows details about the whole code-base (not just snapshots, like GitHub Copilot).
You’re just trolling, aren’t you? Have you used AI for a longer time while coding, and then tried going without for some time? I currently don’t miss it… Keep in mind that you still have to check whether all the code is correct etc. Writing code isn’t the thing that usually takes much of my time; it’s debugging, and finding architecturally sound, good solutions for the problem. And AI is definitely not good at that (even if you’re not that experienced).
So an unreliable boilerplate generator that you need to debug?
Right, I’ve seen that it’s somewhat nice for quickly generating bash scripts etc.
It can certainly generate quick’n’dirty scripts as a starter. But the code quality is often subpar (and often incorrect), which triggers my perfectionism to make it better, at which point I should’ve just written it myself…
But I agree that it can often serve well for exploration, and sometimes you learn new stuff (at least if you weren’t already an expert in the topic; and you should always validate whether it’s correct).
But actual programming in e.g. Rust is a catastrophe with LLMs (more common languages like JS work better, though).
Have you actually read my text wall?
Even o1 (which AFAIK is roughly on par with R1-671B) wasn’t really helpful for me. I often (actually all the time) need correct answers to complex problems, and LLMs just aren’t capable of delivering that.
I still need to try out whether it’s possible to train it on my/our codebase, such that it’s at least usable as something like GitHub Copilot (which I also don’t use, because it just isn’t reliable enough and too often generates bugs). Also, I’m a fast typist: by the time the answer is there and I’ve parsed/read/understood the code, I’d already have written a better version.
DeepSeek
Yeah, it’ll certainly be exciting to see where this goes, i.e. whether it really develops into a useful tool. Though I’m slightly cautious nonetheless: it’s not doing something significantly different (i.e. it’s still an LLM), it’s just a lot cheaper/more efficient to train, and open for everyone (which is great).
What should I expect? (I don’t do powershell, nor do I have a need for it)
confidently so in the face of overwhelming evidence
That I’d really like to see. And I mean more than the marketing bullshit that AI companies are doing…
For the record, I was one of the first jumping on the AI hype-train (as a programmer and computer scientist with a machine-learning background), following the development of GPT-1 through GPT-4, being excited about having to write less boilerplate-y code, getting help with rough ideas, etc. GPT-4 almost got to the point of being a help (similar with o1 etc., or Anthropic’s models). Though I seldom use AI currently (and I’m observing similar with other colleagues and people I know), because it actually slows me down with my stuff or gives wrong ideas, and I have to argue with it, just to see it yet again saturating at a local minimum (aka it doesn’t get better, no matter what input I try). Just so that I end up doing it myself… (which I should’ve done in the first place…).
Same is true for the image-generation side (i.e. first with GANs, now with diffusion-based models).
I can get into more detail about transformer/attention-based models and their current plateau phase (i.e. more hardware doesn’t actually make things significantly better; it gets exponentially more expensive to make things slightly better) if you really want…
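To sketch what I mean by “exponentially more expensive”: the empirical scaling laws (Chinchilla-style; the exact exponents vary by study) have loss falling only as a power law in parameter count $N$ and data $D$, so each fixed improvement in loss costs a multiplicative factor more compute:

```latex
L(N, D) \approx E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}},
\qquad \alpha \approx 0.34,\;\; \beta \approx 0.28
```

With exponents around $1/3$, shaving the same absolute amount off the loss again means roughly an order of magnitude more parameters and data, i.e. compute.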
I of course hope we get a breakthrough, that a model actually, really learns reasoning; but I fear that will take time, and it might even mean we need a different type of hardware.
It’s definitely better than, say, a year ago, but there’s always a new small issue, like suspend not working, or shutting the monitor off crashing the graphics stack, etc.
I really hope they get their shit together and build solid wayland support at some (not too distant) point. But the number of issues is small enough for me that I’ve switched to it.