

Crunchyroll, can weebs not stay in their bedrooms privately??
Also, 2dehands, which is the biggest second hand buying / selling site and app of Belgium. Quite a big one!
“based on news like this”
I think you’re spot on there. Not saying that the BBC is untrustworthy, but there is always a bias in every news source. Especially when it comes to criticizing foreign policy versus local policy.
I am not disputing that things are going badly there. I’m just saying that similar issues are present in a lot of western countries. I am of course only talking about the subject of this article. If you look at how authoritarian the government is, things clearly are worse in China compared to most western countries.
Doesn’t the number of available jobs scale too?
I feel like this problem would be the same if China were a tenth the size; the same goes for the US and other countries. It’s a systemic issue, where the ratio of workers to jobs is wrong and unjust.
You are absolutely allowed to look at a woman or make moves. Just respect other people and their boundaries.
Just move in the world, do stuff you like, meet people, be respectful to them and make connections. Don’t force stuff and respect the wishes of others.
If you do this, there is absolutely room for getting to know people better and becoming romantically involved. Just don’t be a dick.
Also, no need to be attractive as long as you’re true to yourself and are as open to others as you hope for them to be towards you.
I am continuing to extend the life of my six-year-old laptop for as long as possible; hopefully I will be able to buy a Framework when it does eventually die on me. (It’s a semi-shitty Clevo model, but I recently replaced the battery and it’s kind of decent again.)
While technically correct, they do have it in China itself, it’s a modified version called Douyin. It is more restricted, censored and tightly controlled.
I agree that it is a cyberweapon, but don’t think that it’s only used against foreigners, they use it just as much to observe and influence their own population.
Finally, I would like to point out that, to a lesser extent, this is also the case for a lot of US-owned social media and tech companies; Edward Snowden’s revelations, for example, indicate this. While the extent of government control and influence is much larger in China, I wouldn’t underestimate the influence of Meta, Google and Microsoft, for example.
Xonotic is still quite active!
I agree with this take, well formulated!
That’s a very interesting point of view, and indeed well formulated in the video!
I don’t necessarily agree with it though. I as a human being have grown up and learned from experience and the experiences of previous humans that were documented or directly communicated to me. I can see no inherent difference with an artificial intelligence learning on the same data.
I never did all the experiments, nor the research previous scientists did, but I trust their reproducibility and logical conclusions. In the same way, I think artificial intelligence could theoretically also learn these things based on previously documented findings. This would be an ideal “general intelligence” AI.
The main problem I think, is that AI needs to be even more computationally intensive and complex for it to be able to get to these advanced levels of understanding. And at this point, I see it as a fun theoretical exercise without actual practical benefit: the cost (both in money, time and energy) seems far too large to eventually create something that we can already do as humans ourselves.
The current state of LLMs is one of very basic “semblance” of understanding, and close to what you describe as probability based conversation.
I feel that AI is best at doing very specific tasks, where the problem space is small enough for it to actually learn the underlying model. In the same way, I think that LLMs are best at language: rewriting text or generating content. What companies seem to think, though, is that because a model is good at producing realistic language, it is also competent at the contents of what it is writing. And again, for that to be true, it needs a much more advanced method of computation than is currently available.
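The “probability based conversation” idea above can be sketched in a few lines. This is a toy bigram model, nothing like a real LLM, and the corpus and function names are made up for illustration — but it shows how plausible-looking text can fall out of nothing more than counting which word tends to follow which:

```python
import random
from collections import defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Record which words follow which, i.e. an empirical P(next | current).
follows = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current].append(nxt)

def generate(start, length=5):
    """Produce text purely by sampling observed continuations."""
    word, out = start, [start]
    for _ in range(length):
        candidates = follows.get(word)
        if not candidates:  # dead end: no observed continuation
            break
        word = random.choice(candidates)
        out.append(word)
    return " ".join(out)

print(generate("the"))
```

The model never “knows” what a cat or a mat is; it only reproduces statistical regularities, which is why its fluency says little about its competence in the subject matter.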
Take this all with a grain of salt though, as I am no expert on the matter. I am an electrical engineer who no longer works in the sector due to mental issues, but with an interest in computer science.
I am by no means an AI fanboy, and I extremely dislike the fact that it is in the hands of big tech, uses so much energy and is built on the work of people who are not being rewarded in any way. It is a new technology that is being forced and abused in the most capitalist way possible.
I do think however, that what you declare here as fact is not as certain as you make it out to be. Research indicates that machine learning models do in fact form some sort of model of understanding of their problem domain. For example this research. I am all for being critical of AI, but oversimplifying the issue might not work in our favour.
While I understand where you’re coming from, I believe that it distracts from a massive positive effect that the GPL has: the way it ensures collaboration. Lots of contributors to GPL software do so in the knowledge that they are working on something great together. I myself have felt discouraged to contribute to MIT licensed software, because I know that others might just take all the hard work, make something proprietary of it and give nothing back.
I see GPL as some sort of public transaction, it is indeed more limiting than MIT and offers less pure freedom in that sense. But I just love how it uses copyright not for enforcing licensing payment for some private entity, but enforces a contribution to the community as a whole. I find this quite beautiful.
What are you trying to achieve here, “triggering” people? It just registers as infantile to me.
Thank you for taking the time to respond. With siphoning money, I mean not giving actual value in return. The NFT market was a clear example of this: get some hype going, sell the promise of great gains on your investment, once the ball gets rolling make sure you’re out before they realise it’s actually worth nothing. In the end, some smart and cunning people sucked a lot of money from often poor and misinformed small investors.
I think I have an inherent idea of value, as in: the value it has in a human life and the amount of effort needed to produce it. This has become very detached from economic value, where you can have speculation, pumping value and all that other crap. I think that’s what frustrates me about the current financial climate: I just want to be able to pay the people who helped produce the product I buy fairly, with respect to how much time and work they put in. Currently however, so much money is being transferred to people “just for having money”. The idea that money in and of itself can make more money is such a horrible perversion of the original idea of trade…
Your last paragraph is not how money should work at all. Money should represent value that ideally doesn’t change, so that the money I receive for selling a can is worth a can, not a Lambo and not a grain of sand. What you’re describing is closer to speculation and pyramid schemes (NFTs for example).
Either try to explain to me how BTC could be an ideal currency that fixes the problems in existing currencies, or try to explain to me how it’s really cool as an investment vehicle to siphon money from others, but don’t try to do both at the same time.
I fucking love where this went, as I was thinking the exact same responses while reading this thread! Love it when a question about gender results in fundamental ideas surrounding mathematics and the nature of reality.
If you have nextcloud and use linux: Iotas
It is really simple but suits my needs! Also looks great on the GNOME desktop.
I think the issue is not whether it’s sentient or not; it’s how much agency you give it to control things.
Even before the AI craze this was an issue. Imagine if you were to create an automatic turret that kills living beings on sight, you would have to make sure you add a kill switch or you yourself wouldn’t be able to turn it off anymore without getting shot.
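The turret example boils down to one design rule: the override must be checked outside the system’s own decision-making, before any autonomous action. A minimal sketch of that idea, with all names hypothetical:

```python
import threading

kill_switch = threading.Event()  # out-of-band override channel

def autonomous_step(target_detected: bool) -> str:
    """One tick of the control loop: the human override always wins."""
    if kill_switch.is_set():
        return "shutdown"
    if target_detected:
        return "engage"
    return "idle"

# Normal autonomous operation...
assert autonomous_step(True) == "engage"
# ...until the operator flips the switch, which overrides everything.
kill_switch.set()
assert autonomous_step(True) == "shutdown"
```

The point is that `kill_switch` is set through a channel the autonomous logic cannot reason about or refuse; the more adaptive the system, the more important it is that this override stays structurally outside it.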
The scary part is that the more complex and adaptive these systems become, the more difficult it can be to stop them once they are in autonomous mode. I think large language models are just another step in that complexity.
An atomic bomb doesn’t pass a Turing test, but it’s a fucking scary thing nonetheless.
I think they were joking. As in actually submitting bugs (adding bugs to the code).
I can’t really pinpoint why, but I barfed a little after seeing that video.
Killing a bunch of innocent families in his home city will have zero effect on Putin and only make their citizens more likely to keep supporting his invasion.
I understand this reaction, but please let’s not use the lives of innocents as chess pieces if it can be avoided.