As for your questions, I'm not sure if you meant for them to be answered, but I'll give it a whack:


"operation on data"
This is a vague term. In traditional computing an "operation" is something like loading a value (data) from memory into the processor (or a register in the processor). Another operation would be adding the contents of another memory location to that register, and still another would be storing the result in a different memory location.

At a higher level, a single operation might consist of a long string of these machine code operations. Something like:
Let a = b + c
might be 3 (or even more) machine code instructions (see the sketch below). QC operations might be substantially different. I recall one article I read saying that it was still an open question whether, for example, QCs might solve NP-Complete problems in polynomial time (i.e., tractably, rather than taking exponential time). It's not clear to me what "operation" means in that context. If I weren't busy at the moment, I'd make time to review some current literature on the subject.
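
To make that concrete, here's a rough sketch in C of that one high-level statement. The comments show a hypothetical three-instruction sequence a compiler might produce - the mnemonics are made up for illustration, not any particular processor's instruction set:

#include <stdio.h>

int main(void) {
    int b = 2, c = 3;
    int a = b + c;   /* one high-level "operation"...                    */
    /* ...which a compiler might turn into roughly three machine
       instructions (hypothetical mnemonics, for illustration only):
           LOAD  r1, [b]    ; load b from memory into a register
           ADD   r1, [c]    ; add the contents of c to that register
           STORE [a], r1    ; store the register back into a's location  */
    printf("a = %d\n", a);
    return 0;
}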

"When and how did "particles" (4th sentence) come
into this?"

Current computers represent bits of data via the states of transistors. QCs go smaller than that - down to the states of individual particles. I have too vague an understanding to explain any details on this.

My job requires me to go into areas where I essentially know nothing and - within a few months or years - to gain a sufficient understanding of the subject to explain it in simple terms to people who - in general - are a lot smarter and more knowledgeable than I am. It's always a frustrating thing, because most of what is written presumes I know a lot more than I actually do. All I can say is that if you really want to understand something, you have to be willing to make the investment. I just sit down and start reading papers, doing net searches, writing equations, organizing thoughts. I sit down and resolve that I'm not going to get up until I understand this little sub-part. I don't learn much at each sitting, but do that several hundred times over the course of a few months and pretty soon I almost know something important.

It would be nice if PhDs could write well. In their defense, they're usually writing to other PhDs, and they tend to write (and speak) in a sort of jargon-laden shorthand. That just makes people who can jump in, figure it out, and translate it into terms that other technical people can understand all the more valuable.


"What is meant by "structuring data" (4th
sentence) and how the hell do you
"represent data" (4th sentence) in a computer in
the first place, I thought computers just
"stored data"?"

Computers do more than store data. "Structure" means organize. You realize that data in digital computers is represented by 0s and 1s, but it's a little more complicated than that. Let's say we have a 4-bit (nibble) Imaginary Computer. There is an LED display on this computer that allows you to see the binary numeric contents of any memory location. We look and it says "1111", and we ask, "What does it mean?" Well, it means nothing - or rather, it could mean almost anything. It could be an opcode telling the processor to stop. It could be a 4-bit two's complement integer (-1), or an unsigned integer (15), or it could signify an address. How can one know? How does the computer know? The only way to tell is by context. Even though I don't recall any specifics on this with regard to QCs, I get the gist of what they're saying.
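
Here's a toy sketch in C of that ambiguity - the same four bits, 1111, printed under two different interpretations. The variable names are mine, just for illustration:

#include <stdio.h>

int main(void) {
    unsigned char nibble = 0xF;   /* the bit pattern 1111 */

    /* Read as an unsigned 4-bit integer: 15 */
    printf("as unsigned:         %u\n", (unsigned)nibble);

    /* Read as a 4-bit two's complement integer: -1
       (if the high bit is set, subtract 16 to sign-extend) */
    int as_signed = (nibble & 0x8) ? (int)nibble - 16 : (int)nibble;
    printf("as two's complement: %d\n", as_signed);

    /* It could just as easily be an opcode or an address -
       nothing in the bits themselves says which. */
    return 0;
}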

The problem is not that they haven't explained well, but that they (the authors) have assumed a certain level of understanding on the part of their readers. I don't know whether that assumption is justified, but it seems that they have to start somewhere - do they assume that the reader doesn't know English and begin each article with a development of grammar?

I don't know that the assumptions they make are the best ones, but it seems to me that they are not unreasonable.