

Sounds like the problem is lack of enforcement of the existing laws rather than the existing laws being bad.
To provide an extreme example, just because there’s a wave of murders doesn’t mean murder should be made legal.


I believe the EU Parliament has to approve this, so they can block it. The Parliament is elected by Proportional Vote, and we all have MEPs there who, unlike national parliamentarians in countries without Proportional Vote (which is most of them), have to worry more about public opinion in their nation turning against them.
So if this shit ever makes its way to the EU Parliament (where the EU Commission will try to make it pass quietly), contact your country’s MEPs and show them you’re well aware of it.


Whilst I do not agree with the spirit of the previous poster’s message, I must point out that the EU Commission specifically - from where this came - is not elected but nominated, and the nomination is one big horse-trading shit show several levels removed from voters, where everybody but its head is chosen by the Council Of Ministers (which only represents EU National Governments, not National Parliaments), so the whole thing is maybe slightly more “democratic” than nominations for the Chinese Politburo.
(If there is one thing that needs changing in the EU political structures, it’s the crooked, rotten shit show that’s the EU Commission).
That said, the EU Parliament, which can stop most of this shit, is elected, and even via Proportional Vote, so there is no mathematical rigging at all to make some votes count more than others (unlike in First Past The Post Power Duopoly countries like the US or Britain), and hence voting in the EU Election does matter.


Looks like somebody has been promised by one or more large Tech firms a very highly paid non-executive board membership, millionaire speech-circuit engagements or a gold-plated “consulting” gig for when their time in the Commission is over…
Mind you, by now that kind of exchange of “favours” is tradition for the members of the EU Commission.


TL;DR
QLC drives have fewer write-cycles than TLC, and if their data is not refreshed periodically (which their controllers will do automatically when powered) the data in them gets corrupted faster.
In other words, under heavy write usage they will last less time, and at the other end, when used for long-term storage of data, they need to be powered much more frequently merely to refresh the stored states (by reading and writing back the data).
So moving to QLC in cloud applications comes with mid- and long-term costs in terms of power usage and, more importantly, drive end-of-life and replacement.
–
Quad Level Cell SSD technology stores 4 bits per cell - hence 16 levels - whilst TLC (Triple Level Cell) stores 3 bits - hence 8 levels - so the voltage difference between levels is half as much, and so is the margin between levels.
Everything deep down is analog, so the digital circuitry actually stores analog values in the cells and then reads them back and converts them to digital. When reading that analog value, the digital circuit has to decide which digital value it actually maps to, which it does by basically accepting any analog value within a certain range around the mathematically perfect value for that digital state.
(A simple example: on a 3.3V data line, when the I/O pin of a microcontroller reads the voltage, it will decide, say, that anything below 1.2V is a digital LOW (i.e. a zero), anything above 2.1V is a HIGH (a one), and anything in between is an erroneous value - i.e. no signal or a corrupted signal. This, by the way, is why if you make the line between a sender and a receiver chip too long - many meters - or change the signals on it too fast - hundreds of MHz and up - without any special techniques to preserve signal integrity, the receiver will mainly read garbage.)
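To make the margin point concrete, here’s a toy Python model of that readback decision (my own sketch with made-up numbers - the full-scale voltage, guard band and drift values are illustrative, not real NAND specs):

```python
# Toy model: an N-bit cell stores one of 2**N analog levels; the read circuit
# snaps the sensed voltage to the nearest ideal level, but only accepts it if
# it falls within a guard band around that level. Numbers are illustrative.
VREF = 3.2  # hypothetical full-scale cell voltage

def decode(voltage, bits_per_cell, guard_fraction=0.35):
    levels = 2 ** bits_per_cell
    step = VREF / (levels - 1)             # spacing between ideal level voltages
    nearest = round(voltage / step)        # snap to the nearest ideal level
    nearest = min(max(nearest, 0), levels - 1)
    if abs(voltage - nearest * step) > guard_fraction * step:
        return None                        # too far from any level: read as corrupted
    return nearest

# The same 100mV of drift that a TLC cell shrugs off breaks a QLC read:
drift = 0.10
tlc_ideal = 5 * (VREF / (2 ** 3 - 1))      # TLC cell storing level 5 of 8
print(decode(tlc_ideal + drift, 3))        # -> 5 (the margin absorbs the drift)
qlc_ideal = 5 * (VREF / (2 ** 4 - 1))      # QLC cell storing level 5 of 16
print(decode(qlc_ideal + drift, 4))        # -> None (outside the halved guard band)
```

Halve the spacing between levels and you halve the amount of drift or damage needed before a read comes back erroneous (or, worse, as the wrong level).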
So the more digital levels in a single cell, the narrower the margin, and the more likely that, due to the natural decay over time of the stored signal or due to cell damage from repeated writes, the analog value the digital circuitry reads is too far away from the stored digital level, and at best gets marked as erroneous or at worst lands at a different level and thus yields a different digital value.
All this to say that QLC has less endurance (i.e. after fewer writes, the damage to the cells from use means that what is read is not the same value as what was written) and it also has less retention (i.e. if the cell is not powered, signal decay will more quickly cause stored values to end up at a different level than when written).
Now, for powered systems the retention problem is not much of an issue for cloud storage: when powered, the system automatically goes through each cell, reading its value and writing it back to refresh what’s stored there to the mathematically perfect analog value, at the cost of a slightly higher power consumption over time for data that’s mainly read-only (for flash memory, writing uses way more power than reading). The endurance problem, however, is much worse for QLC, because the cells will age twice as fast as TLC’s for data that is frequently written (wear-leveling exists to spread this effect over all cells, thus giving higher overall endurance, but wear-leveling is also there for TLC, so it does not improve the endurance of QLC relative to it).
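For the refresh part, a toy simulation of the idea (again my own sketch - the drift model and refresh interval are made up, and real controllers lean on ECC rather than blindly snapping to the nearest level):

```python
import random

# Model each QLC cell as its stored analog voltage (4 bits = 16 levels).
VREF, BITS = 3.2, 4
STEP = VREF / (2 ** BITS - 1)
cells = [random.randrange(2 ** BITS) * STEP for _ in range(1000)]

def drift(cells):
    # charge leaks a little each day; the error accumulates if never refreshed
    return [v + random.uniform(-0.02, 0.02) for v in cells]

def scrub(cells):
    # "refresh": read each cell and write back the ideal voltage for its level
    # (if drift has already crossed half a step, the wrong level gets locked in -
    # that is the retention failure; real firmware uses ECC to catch this)
    return [round(v / STEP) * STEP for v in cells]

for day in range(30):
    cells = drift(cells)
    if day % 7 == 0:
        cells = scrub(cells)  # a powered controller can do this; a drive in a drawer cannot
```

With half the step size, a QLC cell reaches that half-step point in roughly half the time a TLC cell does, which is why it needs to be powered (and scrubbed) more often.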


Most people don’t actually know what they need until they see it, and the only ones who might are those who already have a process in place (hence know it in detail) and just want it, or parts of it, automated.
People often do think they know what they want, but it’s a very general and fuzzy view, with little in the way of details, and it seldom considers what should happen outside the main path of their process (i.e. things like error situations such as “what if somebody enters the wrong data in this form”, or after-the-fact responsibility tracing in the form of usage logs and reports).
It is actually a bit of an art to tease the details of the requirements from the stakeholders in a consistent and thorough way, and also to spot and gather requirements for those “outside the main process path” elements, and frankly, in my career I’ve met very few people - even amongst business analysts - who are actually good at it.
That said, what is maybe the main advantage of Agile when done properly (with proper use cases and the actual end users trying out the implementation of those requirements) is exactly that it’s an iterative process to refine the requirements, cycling back and forth between requirements gathering, feature development and result evaluation to fill in missing details and tease out further requirements. IMHO, this is actually where Agile shines the most when compared to Waterfall. But as I said, you need to do the requirements gathering and results evaluation parts of Agile (so the parts involving interacting with actual users, both upfront in making use cases and at the end of the cycle in evaluating how well what was implemented fits what they need) to get those gains, and most “Agile” teams out there seem to only do the fashionable parts of Agile, like the “standup meeting”, which aren’t what makes it most valuable as a process.


Well, seniority helps on the deadlines front: you can spot managers trying to force too-short deadlines on you a mile away and throw it back at them (“I am the specialist, so I’m the one who knows best how long it will take”). If they just try and impose deadlines, you can bluntly state “that isn’t possible”, and if they somehow have the authority to push them through anyway, you make sure everybody (especially other managers, ideally the managers above them) knows that you informed them upfront that such deadlines were impossible, so when it inevitably fails, said manager can’t shove the blame your way.
As for obtaining things from other teams, that’s a two-part thing:
Of course, all this requires competent management, since they’re the ones supposed to do it, and if your managers are trying to impose deadlines on you or using slimy trickery to get people to commit to shorter deadlines, they’re NOT competent managers - that kind of shit invariably yields death marches and bug-riddled results that in the mid and long term end up wasting far more time than was shaved off by those shorter deadlines.
Kinda sad that one has to play such games. Welcome to Mankind.


I would describe it as “insufficiently thinking about and researching the problem space”.
From what I’ve seen that’s very common, because developers have a tendency to want to be hands-on rather than merely researching - myself included.
Even just for the sake of spotting inconsistent requirements, or big gaps in the requirements, it’s a good idea to really think things through and cross-check them.
Personally, the further I advanced in my career and the more complex and larger the problems I had to tackle, the bigger the fraction of preparation time became versus the fraction of coding time, and I believe most very senior devs have had the same experience.


I think you’re confusing doing analysis before coding with doing all analysis before coding.
If you do Agile properly (so including Use Cases with user prioritization and user feedback - the whole system, not just doing the fashionable bits like stand-up meetings and then claiming “we do Agile development”), you do analysis before development as part of evaluating how long it will take to implement the requirements contained in each Use Case. In fact, this part of Agile actually pushes people to properly think through the problem - i.e. do the fucking analysis - before they start coding, just in bite-sized, easy-to-endure blocks.
Further, in determining which Use Cases depend on which, you’re doing a form of overall, system-level analysis.
Also, you definitely need some level of overall upfront analysis no matter what: have a go at developing a mission-critical, high-performance system on top of a gigantic dataset by taking a purist “we only look at use cases individually and ignore the system-level overview” approach (thus ignoring the general technical needs of the project that derive from the size of the data, data integrity and performance requirements), and let me know how well it goes when, halfway through the project, you figure out that your system architecture of a single application instance with a database that can’t handle distributed transactions can’t actually deliver on those requirements.
You can refactor code and low level design in a reasonable amount of time, but refactoring system level design is a whole different story.
Of course, in my experience only a handful of shops out there do proper Agile for large projects: most just do the kiddie version - follow the herd by doing famous “agile practices” without actually understanding the process, how it all fits together, and which business and development environments it is appropriate for and which it is not.
I could write a fucking treatise about people thinking they’re “doing Agile” whilst in fact they’re just doing a Theatre Of Agile, where all they do is play at it by acting out the most famous bits.


IMHO, most people’s perception of time usage tends to be heavily skewed towards weighing higher the time taken by the main task - say, creating the code of a program - rather than by the secondary (but nonetheless required-before-completion) tasks like fixing the code.
Notice how so many coders won’t do the proper level of analysis and preparation before actually starting to code: they want to feel like they’re “doing the work”, which for them is the coding part, whilst analysis doesn’t feel like “doing the work” for a dev. So they prematurely dive into coding and end up screwed by things like going down a non-viable implementation route, or missing some important requirement detail with huge implications for the rest, which would have been caught during analysis.
(I also think that’s the reason why, even without AI, people will use stupid “time savers” in the main task, like short variable names, that then screw them in secondary tasks like bug-fixing or adding new requirements to the program later, because they make it far harder to figure out what the code is doing.)
AI speeds up what people feel is the main task - creating the code - but that is de facto more than offset by time lost on supposedly secondary work that doesn’t feel as much like “doing the work”, so it doesn’t get counted the same.
This is why, when it actually gets measured independently and properly - by people who aren’t just trusting their own feeling of “how long it took”, or by people who, thanks to experience, properly measure the total time taken including support activities rather than just trusting their own subjective perception - it turns out that, at least in software development, AI actually slightly reduces productivity.


Doesn’t Windows 11 in practice require even more memory than Windows 10 to operate with decent performance?
Meanwhile my Linux gaming PC seems to actually use less memory than back when it was a Windows machine.


Receives a letter at home from Panasonic containing a message, a color printed sheet and a fridge magnet.
Message reads: “Dear customer, please use the enclosed fridge magnet to hang the provided advert sheet on your Panasonic refrigerator”


This applies just as well to Israel’s increasingly violent ethno-Fascism, ending up in a Genocide that is about to turn into a new Holocaust, as it applies to the US’ impoverishment, explosion of inequality and collapse of social mobility, which, together with the domination of the various pillars of Democracy by the moneyed elites, made the election of a Fascist Populist a certainty.
The toon on the right is obviously a US-style Liberal - the last part (“Too late now, nothing we can do”) is especially telling.
True.
That is, however, a pretty hard and time-consuming change, so to me it makes sense that in the meanwhile we take steps to reduce the harm caused by the system still in place, not least by cracking down hard on Corruption and Conflicts Of Interest, and by closing the legal loopholes that allow certain politicians to stay within the Law whilst purposefully using today the power they have been delegated to do favors for others who have promised them monetary payback tomorrow.
If you’re drowning now you don’t put all your hopes on the ship that might be coming but isn’t even visible yet.