

I use Sphinx with MyST Markdown for this, and usually Plotly Express to generate the JS visuals. Jupyter Book looks pretty good as well
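In case it's useful, here's the kind of minimal Plotly Express snippet I mean (the iris dataset is just Plotly's built-in sample data); dropped into a MyST/Jupyter notebook cell, the built Sphinx page embeds it as an interactive JS figure:

```python
# Minimal Plotly Express example; renders as an interactive JS plot in the page.
import plotly.express as px

df = px.data.iris()  # built-in sample dataset that ships with plotly
fig = px.scatter(df, x="sepal_width", y="sepal_length", color="species")
fig.show()  # or fig.write_html("scatter.html") for a standalone file
```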
I’m with you, I drew the line at calculators though. I can do the damn sums by myself!
Relative point to point
You could say the Linux kernel is an astronomically terrible idea because it doesn't do anything on its own… but it's just the platform; the good comes from what people build on top of it, which adds all the quality-of-life features you'd miss.
Buy ydy
I don't really follow your logic; how else would you propose to shape the audio in a way that isn't "just an effect"?
Your analogy to real life doesn't account for the audio source itself moving, so there is an extra variable beyond just the stereo signal, and that moving source is exactly what spatial audio is modelling.
And your muffling example sounds a bit oversimplified, maybe? My understanding is that the spatial effect is produced by phase-shifting the L/R signals slightly (rough sketch below).
Finally, why not go further? "I don't listen to speaker audio because it's all just effects and mirages made to sound like a real sound; what, only 2^16 discrete positions the diaphragm can be in?" :p
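To make the phase/time-shift point a bit more concrete, here's a toy sketch of my own (an illustration of the general interaural-time-difference idea, not how any real spatial audio stack is implemented): delaying one channel by a fraction of a millisecond already shifts where the sound seems to come from.

```python
# Toy illustration of an interaural time difference (ITD) cue:
# delaying the right channel slightly makes the source appear to sit to the left.
import numpy as np

def apply_itd(mono, sample_rate=48000, itd_seconds=0.0003):
    """Turn a mono signal into stereo with a small inter-channel delay."""
    delay = int(round(itd_seconds * sample_rate))
    left = np.concatenate([mono, np.zeros(delay)])
    right = np.concatenate([np.zeros(delay), mono])
    return np.stack([left, right], axis=1)  # shape (n_samples, 2)

# Example: half a second of a 1 kHz tone, nudged to the left
t = np.arange(0, 0.5, 1 / 48000)
stereo = apply_itd(np.sin(2 * np.pi * 1000 * t))
```

Real binaural/spatial rendering layers frequency-dependent level and phase differences (HRTFs) on top of this, but it's all still "shaping the signal", which is kind of my point.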
There is a huge difference though: one is making hardware and the other is copying books into your training pipeline.
The copying happens during dataset preparation.
Privacy-preserving federated learning is a thing - essentially you train a local model and send the weight updates back to Google rather than the data itself… but it's also early days, so who knows what vulnerabilities may exist.
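Roughly the shape of it, in a bare-bones federated-averaging sketch of my own (linear model just for illustration; real deployments add secure aggregation and differential privacy on top):

```python
# Minimal federated averaging sketch: clients compute weight deltas locally,
# only the deltas go back to the server, the raw data never leaves the device.
import numpy as np

def local_update(global_weights, local_data, lr=0.1):
    """One client: take the global model, do a local gradient step, return only the delta."""
    X, y = local_data
    grad = X.T @ (X @ global_weights - y) / len(y)  # least-squares gradient on local data
    return -lr * grad  # weight update, not the data

def federated_round(global_weights, clients):
    """Server: average the clients' deltas and apply them to the global model."""
    deltas = [local_update(global_weights, data) for data in clients]
    return global_weights + np.mean(deltas, axis=0)

# Example: three clients, each with private (X, y) data
rng = np.random.default_rng(0)
clients = [(rng.normal(size=(50, 3)), rng.normal(size=50)) for _ in range(3)]
w = np.zeros(3)
for _ in range(20):
    w = federated_round(w, clients)
```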
For me the Infinity subscription bypass stopped working, so I finally made the switch.
Unfortunately not, but here is a little kitchen-sink-type demo: https://myst-nb.readthedocs.io/en/latest/authoring/jupyter-notebooks.html
MyST-NB is probably the place to start looking, btw - forgot to mention it in the previous post.
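For reference, wiring it into Sphinx is basically just adding the extension to conf.py, something like this (project/author are placeholders, and the execution-mode option is as I remember it from the MyST-NB docs):

```python
# conf.py - minimal Sphinx config with MyST-NB enabled
project = "my-docs"   # placeholder
author = "me"         # placeholder

extensions = [
    "myst_nb",  # parses MyST Markdown and executes/renders notebooks
]

nb_execution_mode = "auto"  # only execute notebooks that are missing outputs
```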