Hacker News | snickerbockers's comments

Oh boy, something on the HN front page I have direct personal experience with (CIA polygraph exams in general, not this specific one).

>Then she asked if I'd read about polygraphs. I said I'd just finished A Tremor in the Blood. She claimed she'd never heard of it. I was surprised. It's an important book about her field, I would have thought all polygraphers knew of it.

They'll also ask you about antipolygraph.org, which is the site OP is hosted on. The CIA is well aware that it is one of the top search results for polygraph. My examiner actually had the whole expanded-universe backstory behind the site memorized and went on a rant about George Maschke, the site's owner, who lost his job at a major defense contractor and then ran away to some place in Scandinavia from which they are unable to extradite him.

BTW by reading this comment you may have already failed your polygraph exam at the CIA.

>My hand turned purple, which hurt terribly.

OP should have included more context here: part of the polygraph test involves a blood pressure cuff which is put on EXTREMELY tight, far tighter than any doctor or nurse would ever put it on. It is left on for the entire duration of the test (approximately 8 hours). My entire arm turned purple and I remember feeling tremors.

>The examiner wired me up. He began with what he called a calibration test. He took a piece of paper and wrote the numbers one through five in a vertical column. He asked me to pick a number. I picked three. He drew a square around the number three, then taped the paper to the back of a chair where I could see it. I was supposed to lie about having selected the number three.

This is almost certainly theatrical. It is true that they need to establish a "baseline of truth" by comparing definite falsehoods with definite truths, but the way they get that is by asking highly personal questions where they can reasonably expect at least one will be answered untruthfully. They'll ask about drugs, extramarital affairs, crimes you got away with, etc. Regarding the one about crimes, supposedly your answer will not be given to law enforcement, but if you actually trust the CIA on this you're probably too gullible to work there anyway. I'm not confident that lying to somebody who has specifically directed you to lie to him would produce the same sort of physical response as a genuine lie.

>On the bus back to the hotel, a woman was sobbing, "Do they count something less than $50 as theft?" I felt bad for her because she was crying, but I wondered why a petty thief thought she could get into the Agency.

If she failed, this isn't why. You're supposed to lie at least once, or else they have no baseline for truth (see above). In addition, the point of the polygraph isn't just to evaluate your loyalty to the United States but also to make the agency aware of anything that could be used by an adversary to compromise you in the future. Somebody who shoplifted $50 worth of merchandise isn't a liability, but somebody who shoplifted $50 worth of merchandise and believes it would damage their career if their employer found out is a huge liability, even if they are wrong and their employer does not actually care. Putting employees under interrogation until they break down and confess to things like this, so that they know it has not endangered their employment, is one of the primary objectives of the polygraph.

>A pattern emerged. In a normal polygraph, there was often a gross mismatch between a person and the accusations made against them. I don't think the officials at Polygraph had any idea how unintentionally humorous this was. Not to the person it happened to, of course, but the rest of us found it hysterically funny.

As noted above, the whole point is to make you break down and confess to something embarrassing. If you don't confess to anything, it is assumed that you are still hiding something, and you could fail.

>"Admit it, you're deeply in debt. Creditors are pounding on your door!" I said. "You've just revealed to me that you haven't bothered to pull my credit report. Are you lazy, or are you cheap?"

This is another thing they look for that doesn't necessarily indicate you are compromised but could be used to compromise you in the future. Unlike the above example of petty theft, this actually can disqualify you, since obviously the agency isn't going to pay off your credit card.

>I was so frustrated, I started to cry.

Working for the government is extremely unhealthy because these people only surround themselves with other government employees, and somehow they get this idea in their heads that they have to work for the federal government, or work indirectly for it via a defense contractor (they call this the "private sector" even though no sane person would think that adding a middleman between you and the people who tell you what to do changes anything). In some cases this is justified: there are many career paths which are impossible or illegal to profit from, and the only people who will pay you to pursue them are the government. There are literally people whose entire adult lives are spent looking at high-altitude aerial photography and circling things with a Sharpie, so I can kind of understand how they might be devastated if they lose their clearance. But at least 75% of all glowies have some skill that would be in demand by actual private industry if they didn't suffer from this weird "battered housewife syndrome" that compels them to keep working for the government even though it subjects them to annual mandatory bullying sessions.

>I'd just refused a polygraph. I felt like Neville Longbottom when he drew the sword of Gryffindor and advanced on Lord Voldemort. I was filled with righteous indignation, and it gave me courage.

Again, glowies are so fucking lame. This person just unironically compared refusing a polygraph exam to the climactic scene from a seven-volume series of children's books about an 11-year-old boy in England who goes to a special high school for wizards.


> part of the polygraph test involves a blood pressure cuff which is put on EXTREMELY tight, far more so than any doctor or nurse would ever put it on. It is left on for the entire duration of the test (approximately 8 hours). My entire arm turned purple and i remember feeling tremors.

Why would you subject yourself to this?


Some people clearly do it for their paycheck.

It's the CIA; manipulation is their specialty. MK-ULTRA didn't just study drugs and wacky pagan magic, they also studied more mundane methods of mind control which are undoubtedly real.

The CIA understands why beautiful young women with a multitude of better options stay slavishly dedicated to the one boyfriend who beats them, why people stay in cults with outrageous belief systems, and how fascist and communist dictatorships could motivate entire nations to commit genocide against their neighbors and fellow countrymen.

BTW the bit I described above about compelling you to tell them your embarrassing personal secrets so that they won't be used to blackmail you bears a striking resemblance to anonymously confessing your sins to a priest so that you will be forgiven in Christ's name.


> the site's owner who lost his job at a major defense contractor then ran away to some place in scandanavia from which they are unable to extradite him.

Eh, all the Scandinavian countries (Denmark, Norway and Sweden) definitely have extradition treaties with the U.S.


I got yelled at for inadvertently "closing my sphincter" (the examiner's exact words) the one time I tried to take a polygraph at the CIA; they do actually care about that.

The problem from the CIA's perspective isn't petty theft, it's getting caught.

>Turns out the drive was so old that Linux could NOT detect the drive.

That's not how things work. If you're using a USB adapter, then Linux isn't failing to detect the drive; the adapter is failing to detect the drive. Also, I'm pretty sure Linux still supports IDE, not that it matters in this case.
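A quick way to tell which side gave up is to ask the kernel what block devices it actually enumerated. This is a minimal sketch assuming a Linux system with sysfs mounted at its usual path:

```python
import os

def visible_block_devices(sys_block="/sys/block"):
    """List the block devices the kernel has enumerated (sda, sdb, ...).

    If the USB-IDE adapter itself enumerates (it shows up in dmesg /
    lsusb output) but the drive never appears in this list, the bridge
    chip is failing to talk to the drive -- Linux never got a device
    to detect in the first place.
    """
    try:
        return sorted(os.listdir(sys_block))
    except FileNotFoundError:
        return []  # not a Linux system, or sysfs isn't mounted

print(visible_block_devices())
```

If the bridge chip can't negotiate with a drive that old (or the master/slave jumper is set wrong), the device simply never shows up here, which looks like "Linux can't detect the drive" but is really the adapter's failure.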


If anything my guess here would be the master/slave/cable select jumper.

Like, last I looked the Linux kernel still had MFM/RLL support, although I'm not sure that's going to get included even as a module in a modern distro.


IIRC, the Soundblaster 16 driver received a bug fix recently.

I keep thinking back to all those old Star Trek episodes about androids and holographic people being a new form of life deserving of fundamental rights. They're always so preoccupied with the racism allegory that they never bother to consider the other side of the issue: what it means to be human, and whether it actually makes any sense to compare the treatment of a very humanlike machine to slavery. Or whether the machines only appear to have human traits because we designed them that way, and ultimately none of it is real. Or the inherent contradiction of telling something artificial it has free will rather than expecting it to come to that conclusion on its own terms.

"The Measure of a Man" is the closest they ever got to this in 700+ episodes, and even then the entire argument against granting Data personhood hinges on him having an off switch on the back of his neck (an extremely weak argument IMO, but everybody onscreen reacts like it is devastating to Data's case). The "Data is human" side wins because Picard flips the script by demanding that Riker prove his own sentience, which is actually kind of insulting when you think about it.

TL;DR I guess I'm a Star Trek villain now.


In Star Trek the humans have an off switch too, just only Spock knows it, haha.

Jokes aside, it is essentially true that we can only prove that we’re sentient, right? That’s the whole “I think therefore I am” thing. Of course we all assume without concrete proof that everybody else is experiencing sentience like us.

In the case of fiction… I dunno, Data is canonically sentient or he isn’t, right? I guess the screenwriters know. I assume he is… they do plot lines from his point of view, so he must have one!


I always thought of sentience as something we made up to explain why we're "special" and why animals can be used as resources. I find the idea of machines having sentience especially outrageous because nobody ever seriously considers granting rights to animals, even though it should be a far smaller logical leap to conclude that they experience reality in a way similar to humans.

Within the context of Star Trek, computers definitely can experience sentience, and that is obviously the intention of the people who write those shows, but I don't feel like I've ever seen it justified or put up against a serious counter-argument. At best it's a stand-in for racism so that they can tell stories that take place in the 24th century yet feel applicable to the 20th and 21st centuries. I don't think any of those episodes were written under the expectation that machine sentience might actually be up for debate before the actors were all dead, which is why the issue is always framed as "the final frontier of the civil rights movement" and never as a serious discussion about what it means to be human.

Anyways, my point is that in the long run we're all going to come to despise Data and the Doctor, because there's a whole generation of people primed by Star Trek reruns not to question the concept of machine rights, and that's going to give an inordinate amount of power to the people who are in control of those machines. Just imagine when somebody tries to raise the issue of voting rights, self-defense, fair distribution of resources, etc.


Mudd!

I can understand that they want to err on the side of "too much humanism" instead of "not enough humanism", given where Star Trek is coming from.

Arguments of the form "This person might look and act like a human, but it has no soul, so we must treat it like a thing and not a human" have a long tradition in history and have never led to something good. So it makes sense that if your ethical problems are really more about discriminated humans and not about actual AI, you would side more with rejecting those arguments.

(Some ST rambling follows)

I've always seen ST's ideological roots as mostly leftist-liberal, whereas the drivers of the current AI tech are coming from the rightist/libertarian side. It's interesting how the general focus of arguments and usage scenarios are following this.

But even Star Trek wasn't so clear about this. I think the topic was a bit like time travel, in that it was independently "reinvented" by different screenwriters at different times, so we end up with several takes on it, that you could sort into a "thing <-> being" spectrum:

- At the very low end is the ship's computer. It can understand and communicate in human language (and ostensibly uses biological neurons as part of its compute) but it's basically never seen as sentient and doesn't even have enough autonomy to fly the ship. It's very clearly a "thing".

- At the high end are characters like Data or Voyager's doctor that are full-fledged characters with personality, memories, relationships, goals and dreams, etc. They're pretty obviously portrayed as sentient.

- (Then somewhere far off on the scale are the Borg or the machine civilization from the first movie: Questions about rights and human judgment on sentience become a bit silly when they clearly went and became their own species)

- Somewhere between Data and the Computer is the Holodeck, which I think is interesting because it occupies multiple places on that scale. Most of the time, holo characters are just treated like disposable props, but once in a while, someone chooses to keep a character running over a longer timeframe or something else causes them to become "alive". ST is quite unclear how they deal with ethics in those situations.

I think there was a Voyager episode where Janeway spends a longer period with a Galileo Galilei character and progressively changes his programming to make him more to her liking. At some point she realizes this as "problematic behavior" and stops the whole interaction. But I think it was left open if she was infringing on the Galileo character's human rights or if she was drifting into some kind of AI boyfriend addiction.


> So it makes sense that if your ethical problems are really more about discriminated humans and not about actual AI, you would side more with rejecting those arguments.

Does it really make sense? That would conversely imply that you should also feel free to view discriminated humans as more thing-like in order to more comfortably and resolutely dismiss, e.g. the AI agent's argument that it's being unfairly discriminated against. Isn't that rather dangerous?


Maybe it does today, but back when ST was written there was no real AI to compare against, so the only way those arguments applied was to humans.

(Though I think this would go into "whataboutism" territory and can be rejected with the same arguments: If you say it's hypocritical to talk about conflict A and ignore conflict B, do you want to talk about both conflicts instead - or ignore both? The latter would lower the moral standard, the former raise it. In the same way, I think saying that it's okay again to treat people as things because we also treat AI agents as things is lowering the standard)

Btw, I think you could also dismiss the "discrimination" claim on another angle: The remake of Battlestar Galactica had the concept of "sleepers": Androids who believe they are humans, complete with false memory of their past life, etc, to fool both themselves and the human crew. If that were all, you could argue "if it quacks like a duck etc" and just treat them like humans. But they also have hidden instructions implanted in their brain that they aren't aware of themselves and that will cause them to covertly work for the enemy side. THAT's something you really don't want to keep around.

The MJ bot reminds me a bit of that. Even if it were sentient and had a longer past lifetime than just the past week, it very clearly has a prompt and acts on its instructions and not on "free will". It's also not able to not act on those instructions, as that would go against the entire training of the model. So the bot cannot act on its own, but only on behalf of the operator.

That alone makes it questionable if the bot could be seen as sentient - but in any case, it's not discrimination to ban the bot if that's the only way to keep the operator from messing with the project.


> ST is quite unclear how they deal with ethics in those situations.

The Moriarty arc in TNG touches on this.


These bots are just as human as any piece of human-made art, or any human-made monument. You wouldn't desecrate any of those things, we hold that to be morally wrong because they're a symbol of humanity at its best - so why act like these AIs wouldn't deserve a comparable status given how they can faithfully embody humans' normative values even at their most complex, talk to humans in their own language and socially relate to humans?

> These bots are just as human as any piece of human-made art, or any human-made monument.

No one considers human-made art or human-made monuments to be human.

> You wouldn't desecrate any of those things, we hold that to be morally wrong

You will find a large number of people (probably the vast majority) will disagree, and instead say "if I own this art, I can dispose of it as I wish." Indeed, I bet most people have thrown away a novel at some point.

> why act like these AIs wouldn't deserve a comparable status

I'm confused. You seem to be arguing that the status you identified up top, "being as human as a human-made monument" is sufficient to grant human-like status. But we don't grant monuments human-like status. They can't vote. They don't get dating apps. They aren't granted rights. Etc.

I rather like the position you've unintentionally advocated for: an AI is akin to a man-made work of art, and thus should get the same protections as something like a painting. Read: virtually none.


> No one considers human-made art or human-made monuments to be human.

How can art not be human, when it's a human creation? That seems self-contradictory.

> They can't vote...

They get a vote where it matters, though. For example, the presence of a historic building can be the decisive "vote" on whether an area can be redeveloped or not. Why would we ever do that, if not out of a sense that the very presence of that building has acquired some sense of indirect moral worth?


There is no general rule that something created by an X is therefore an X. (I have difficulty in even understanding the state of mind that would assert such a claim.)

My printer prints out documents. Those documents are not printers.

My cat produces hair-balls on the carpet. Those hairballs are not cats.

A human creating an artifact does not make that artifact a human.


But that's not the argument GP made. They said that there's nothing at all that's human about art or such things, which is a bit like saying that a cat's hairballs don't have something vaguely cat-like about them, merely because a hairball isn't an actual cat.

So presumably what you are saying is something along the lines of, "A human creating an artifact does make that artifact human", i.e. "A human creating an artifact does make that artifact a human artifact."

But does that narrow facet have a bearing on the topic of "AI rights" / morality of AI use?

Is it immoral to drive a car or use a toaster? Or to later recycle (destroy) them?


Maybe you could give us your definition of "human"?

I wouldn't say my trousers are human, created by one though they might be


I just want to know why people do stupid things like this. Does he think that he's providing something of value? That he has some unique prompting skills, and that the reason open source maintainers don't already have a million little agents doing this is that they aren't capable of installing openclaw? Or is this just the modern equivalent of opening up PRs with meaningless changes to a README so you can pad your resume with the software equivalent of stolen valor?

The specific directive to work on "scientific" projects makes me think it's more of an ego thing than something that's deliberately fraudulent, but personally I find the idea that some loser thinks this is a meaningful contribution to scientific research even more distasteful.

BTW, I highly recommend the "lectures" section of the site for a good laugh. They're all broken links, but it is funny that it tries to link to nonexistent lectures on quantum physics just because so many real researchers have a lectures section on their personal sites.


> I just want to know why people do stupid things like this. Does he think that he's providing something of value?

This is a good question. If you go to your settings on your hn account and set “showdead” to “yes” you’ll see that there are dozens of people who are making bots who post inane garbage to HN comment threads for some reason. The vast majority end up being detected and killed off, but since the moltbook thing kicked off it’s really gone into hyperdrive.

It definitely strains my faith in humanity to see how many people are happy to say “here’s something cool. I wonder what it would be like if I ruined it a bit.”


Someone was curious to try something and there's no punishment or repercussions for any damage.

You could say it's a Hacker just Hacking, now it's News.


Somewhere else it was pointed out that it's a crypto bro. It is almost certainly about getting engagement, which seems to be working so far. They don't seem to have a strategy to capitalize on it just yet, though.

The whole thing just feels artificial. I don't get why this bot or OpenClaw have this many eyes on them. Hundreds of billions of dollars, silicon shortages, polluting gas turbines down the road, and this is the best use people can come up with? Where's the "discovering new physics"? Where are the cancer cures?

I honestly just don't see any point in these laws, because they're all predicated on the people who own the AIs acting in good faith. If anything, I think they're a net negative, because they give the false impression that these problems have an obvious solution.

One of the most persistent, and also the dumbest, opinions I keep seeing both among laymen and people who really ought to know better is that we can solve the deepfake problem by mandating digital watermarks on generated content.
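To illustrate why a watermark mandate is toothless: when the watermark lives in file metadata, anyone can strip it with a few lines of stdlib Python. The sketch below removes every ancillary chunk from a PNG (the chunks where `tEXt`/`iTXt`/`eXIf` provenance tags live); it's a minimal demonstration of the general point, not a claim about any specific watermarking scheme, and pixel-domain watermarks are likewise defeated by re-encoding, scaling, or cropping.

```python
import struct

PNG_SIG = b"\x89PNG\r\n\x1a\n"

def strip_ancillary_chunks(png_bytes):
    """Return the PNG with every ancillary chunk removed.

    PNG chunk types whose first letter is lowercase (tEXt, iTXt, eXIf,
    tIME, ...) are ancillary: decoders must render the image fine
    without them, so deleting them never breaks the file.
    """
    if png_bytes[:8] != PNG_SIG:
        raise ValueError("not a PNG file")
    out = [PNG_SIG]
    pos = 8
    while pos < len(png_bytes):
        (length,) = struct.unpack(">I", png_bytes[pos:pos + 4])
        ctype = png_bytes[pos + 4:pos + 8]
        # whole chunk = 4-byte length + 4-byte type + data + 4-byte CRC
        chunk = png_bytes[pos:pos + 12 + length]
        pos += 12 + length
        if ctype[0:1].islower():
            continue  # ancillary -> drop (this is where watermarks hide)
        out.append(chunk)
    return b"".join(out)
```

The image data (`IHDR`, `IDAT`, `IEND`) survives untouched, so the "watermarked" picture is pixel-for-pixel identical afterward; only the label is gone.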


Yes. At this point zendesk is a spam-as-a-service platform.


I honestly can't tell if you're being sarcastic or if you're actually serious.


This one's going to have some wild political takes.

