• 0 Posts
  • 50 Comments
Joined 2 years ago
Cake day: July 1st, 2023

  • Enterprise applications are often developed by the most “quick, ship this feature” kind of developers in the world. Unless the client is paying for the development, a quick look at the SQL tables often shows unsalted passwords sitting in a table.

    I’ve seen this in construction, medical, recruitment and other industries.

    Until cyber security law requires code auditing for handling and maintaining PII, it’s mostly a “you’re fine until you get breached” approach. Even bodies like the ACSC (Australian Cyber Security Centre) have limited guidelines, practically worthless; at most they suggest having MFA for web-facing services. Most cyber security insurers require something, but it’s also practically self-reported, with no proof. So if someone gets breached because someone left everyone’s passwords in a table, largely unguarded, the world becomes a worse place and the list of usernames and passwords on haveibeenpwned grows (a rough sketch of the salted hashing that would avoid this is below).
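
    As a minimal sketch (my own illustration, not any particular vendor’s code) of what salted, properly hashed password storage looks like, using only Python’s standard library:

```python
import hashlib
import hmac
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Return (salt, digest) to store instead of the plain password."""
    salt = os.urandom(16)  # unique random salt per user
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt, digest

def verify_password(password: str, salt: bytes, stored: bytes) -> bool:
    """Recompute with the stored salt and compare in constant time."""
    candidate = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return hmac.compare_digest(candidate, stored)
```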

    Edit: if a client pays and therefore has control to determine things like code auditing and security auditing, as well as SAML etc. etc., then it’s something else. But, say, in the construction industry I’ve seen the same garbage-tier software used at 12 different companies, warts and all. The developer is semi-local to Australia, ignoring the offshore developers…




  • I’m far from an expert, sorry, but my experience is so far so good (literally wizard-configured in Proxmox, set and forget), even through losing a single disk. Performance for VM disks was great.

    I can’t see why regular files would be any different.

    I have 3 disks, one on each host, with Ceph keeping 2 copies (tolerant to 1 disk loss) distributed across them. That’s practically what I think you’re after; the rough arithmetic is sketched below.
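
    A back-of-envelope sketch of what that replication means for capacity and failures (the 1 TB disk size is just a placeholder, not my actual disks, and this is not Ceph’s real placement logic):

```python
def replicated_pool_overview(disk_tb: float, hosts: int, replicas: int) -> dict:
    """Rough capacity/fault arithmetic for a Ceph replicated pool,
    assuming one disk per host and equal disk sizes."""
    raw = disk_tb * hosts
    usable = raw / replicas      # every object is stored `replicas` times
    tolerated = replicas - 1     # losses survivable while one copy remains
    return {"raw_tb": raw, "usable_tb": usable, "disk_losses_tolerated": tolerated}

# The setup described above: 3 hosts, 1 disk each, 2 copies.
print(replicated_pool_overview(disk_tb=1.0, hosts=3, replicas=2))
# -> {'raw_tb': 3.0, 'usable_tb': 1.5, 'disk_losses_tolerated': 1}
```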

    I’m not sure about seeing the file system while all the hosts are offline, but if you’ve got any one system with a valid copy online you should be able to see it. I do. But my emphasis is generally on getting the host back online.

    I’m not 100% sure what you’re trying to do, but a mix of Ceph as the remote storage plus something like Syncthing on an endpoint to send stuff to it might work? Syncthing might just work without Ceph.

    I also run ZFS on an 8-disk NAS that’s my primary storage, with shares for my Docker containers to send stuff to and my media server to get it off. That’s just TrueNAS SCALE. That way it handles data similarly. ZFS is also very good, but until SCALE came out it wasn’t really possible to have the “add a compute node to expand your storage pool” model, which is how I want my VM hosts. ZFS scale-out looks way harder than Ceph.

    Not sure if any of that is helpful for your case, but I recommend trying something if you’ve got spare hardware and seeing how it goes on dummy data, then blowing it away and trying something else. See how it acts when you take a machine offline. When you know what you want, do a final blow-away and implement it the way you learned works best.


  • 3x Intel NUC 6th gen i5 (2 cores), 32 GB RAM. Proxmox cluster with Ceph.

    I just ignored the limitation and tried with a single 32 GB SO-DIMM once (out of a laptop) and it worked fine, but I went back to 2x16 GB DIMMs since the bottleneck was still the 2-core CPU. Lol.

    I’ve been running that cluster for 7 or so years now, since I bought them new.

    I suggest you’re fine running off shit-tier hardware, since three nodes give redundancy and enough performance. I’ve run entire proofs of concept for clients off them: dual domain controllers and RD gateway, broker, session hosts, FSLogix etc., back when MS had only just bought that tech. Meanwhile my home “ARR” stack just plugs away in Docker containers. Even my OPNsense router is virtual, running on them. Just get a proper managed switch and take the internet in on a VLAN to the guest VM on a separate virtual NIC.

    Point is, it’s still capable today.


  • I’m in Australia; generally we have cooking instructions and microwaves that talk about wattage and time, never duty cycle.

    E.g. a sauce packet says 600 W, 30 sec. Press the power button until it shows 600 W and put it in for 30 seconds.

    I know there are duty cycles; you can hear them. I don’t know if that’s how the setting is converted, as a fraction of the 1500 watt maximum (40% duty cycle = 600 W), but you hear it turn on and off most on the preconfigured defrost buttons. If that is how it works, the arithmetic is sketched below.
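
    A small sketch of that conversion (the 1500 W maximum is just the figure from this comment, and the assumption that lower settings are pure on/off cycling is mine):

```python
FULL_POWER_W = 1500  # assumed magnetron rating, taken from the comment above

def duty_cycle(target_w: float, full_w: float = FULL_POWER_W) -> float:
    """If the magnetron only runs flat out, a lower wattage setting is just
    on/off cycling: fraction of time on = target power / full power."""
    return target_w / full_w

def full_power_equivalent_seconds(target_w: float, seconds: float) -> float:
    """Time at full power that delivers the same total energy."""
    return seconds * duty_cycle(target_w)

print(duty_cycle(600))                         # 0.4 -> the 40% duty cycle above
print(full_power_equivalent_seconds(600, 30))  # 12.0 -> 600 W for 30 s ~ 12 s flat out
```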

    Either way, I wouldn’t be surprised if it’s all just the same underneath with regional translations.


  • It’s solving a real problem in a niche case. Someone called it gimmicky, but it’s actually just a good tool, currently produced by an unknown quantity. Hopefully it’ll be sorted, or someone else takes up the reins and creates an alternative that works perfectly for all my different ISOs.

    For the average home punter, maybe even up to home-lab enthusiast, it’s probably not saving much time. For me it’s on my keyring and I use it to reload Proxmox hosts, Nutanix hosts, and individual Ubuntu VMs running ROS Noetic, not to mention reimaging test devices. Probably a thrice-weekly thing.

    So yeah, cumulatively it’s saving me a lot of time, and it trivialises the whole process.

    If this was a spanner I’d just go Sidchrome or Kincrome instead of my Stanley. But it’s a bit niche, so I don’t know what else allows such simple multi-ISO boot. Always open to options.



  • I think you probably don’t realise that it’s standards and certifications you hate. No IT person wants yet another system generating more calls and complexity, but here is ISO, or a cyber insurance policy, or NIST, or the ACSC, asking for minimums with checklists, and a cyber review answering them with controls.

    Crazy that there’s so little understanding about why it’s there, that you just think it’s the “IT guy” wanting those.


  • SR-IOV works already though? That’s not needed for this. The motherboard presents the PCI device to the guest regardless of what’s plugged in. Works fine.

    This is for when you want many guests to share graphics by partitioning a GPU. The host still retains the card and presents a slice of it to each guest. You need to partition the video RAM up equally, though, so it’s generally only useful in VDI, where you want an RTX A6000 class card (48 GB) split across six guests with 8 GB each: they share the GPU, but keep their individual video RAM. The economy of scale can work out in graphics or maybe ML situations. It’s not so useful at home, since you’ll probably have an RTX 3080 with 10-12 GB of RAM, you wouldn’t want to split below 8 GB for modern games, and partitions need to be equally sized. For a 10 GB card, two guests = 2x5 GB, which would probably be a poor experience: lots of frame stutter as it shuffles data between system RAM and video RAM. The partition arithmetic is sketched below.
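
    A trivial sketch of that arithmetic (card sizes are just the examples from this comment, purely illustrative; this isn’t any vGPU API):

```python
def per_guest_vram_gb(total_vram_gb: float, guests: int) -> float:
    """vGPU-style partitioning: the framebuffer is carved into equal fixed
    slices, one per guest, while the GPU's compute time is shared."""
    return total_vram_gb / guests

print(per_guest_vram_gb(48, 6))  # 8.0 GB each on a 48 GB RTX A6000 class card
print(per_guest_vram_gb(10, 2))  # 5.0 GB each on a ~10 GB consumer card
```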

    Hope that helps. Unless this technology unlocks better partitions, it’s less about that and more about opening VDI and machine learning up to a fully open-source context like Proxmox, rather than the driver being locked behind Hyper-V, VMware and Citrix Hypervisor/Xen plus a big yearly license. Maybe it still needs that yearly license.


  • This is possible now, but in Xen or VMware you need to buy an NVIDIA license to unlock this feature. You can trial it briefly in a lab, but you can’t give 4 guests 2 GB of VRAM each on your graphics card without NVIDIA’s specialist proprietary driver on both the host and the guest.

    For VDI, where you can buy 48 GB RTX A6000 graphics cards, with architects (for example) each getting about 8 GB, you can run six guests concurrently per card. At a few hundred architects that scales better than buying many $5000 workstations that struggle with WFH.

    For a home user, maybe being able to split a standard RTX 3070, with what, like 8 GB, between your two kids might be OK? Probably not though.

    Right now I have a hacky way, not really supported by NVIDIA, to split a graphics card between two guest VMs, but it’s neither license-compatible nor what I’d call “production ready”. I’d like Proxmox to be able to handle this out of the box, because it’s already in the kernel.

    I’ve no idea what this means for licensing though. The yearly license cost just to be allowed to use the driver is stupidly expensive. The RTX A-series cards are already dumb money.

    Either way it’s a good thing, but probably not much news for the average enthusiast.









  • I knew a datacentre that had hundreds of PS3s for rendering fluid simulation and other such things that at the time were absolutely cutting-edge tech. I believe F1 and some early 3D Pixar stuff was rendered on those farms. But like all things, technology marched on; FPGAs and CUDA have taken that space.

    Cell was definitely heavily used by specialist/niche industries though.

    I wonder if I can find you a link that explains it better than the rumours I heard from staff who used to work in those datacentres.

    Hmm, hard to find commercial applications; individuals might have blogged about it, but otherwise here’s what I’m talking about: https://en.m.wikipedia.org/wiki/PlayStation_3_cluster


  • Ah, you’re thinking I’m reading your other comments to other people.

    BTW, HIPAA is for providers and how they handle their patients’ information. Once it’s in the person’s own hands, it’s no longer under HIPAA and it no longer applies. If you decide to put your private medical information on a commercial advertising board on a highway, and it’s not breaking laws to do with acceptable advertising (e.g. gore or smut), you’ll be able to do that too.

    Basically there’s no expectation for an individual person to adhere to HIPAA for their own personal information storage; it doesn’t apply.

    My assumption with your lawyer comment is that this was an insurance or medical malpractice lawyer who might collect this information for their client cases, since without client/patient requirements, HIPAA is irrelevant.