Saturday, November 09, 2013

Hellfire . . .


DEATH FROM ABOVE. GQ has an article by Matthew Power, “Confessions of a Drone Warrior,” that is worth reading. “Drones” have been effective, but there's a human cost paid by the young folks who do the killing. They're good at it, and it gives them nightmares, and no wonder the Taliban have been freaking out and using their political influence to get this curtailed.
By the spring of 2011, almost six years after he’d signed on, Senior Airman Brandon Bryant left the Air Force, turning down a $109,000 bonus to keep flying. He was presented with a sort of scorecard covering his squadron’s missions. “They gave me a list of achievements,” he says. “Enemies killed, enemies captured, high-value targets killed or captured, stuff like that.” He called it his diploma. He hadn’t lased the target or pulled the trigger on all of the deaths tallied, but by flying in the missions he felt he had enabled them. “The number,” he says, “made me sick to my stomach.”
Total enemies killed in action: 1,626.

3 comments:

the Keystone Garter said...

I think the threat of hackers (and, absent enforced treaties, the threat of AI and robotic pilots) will necessitate military assets such as UAVs and jets with improved command-and-control and access-control systems. This will likely necessitate, at least until arms control treaties are laid down and enforced, quantum encryption or quantum key distribution (QKD), implemented in cockpits and airbase CAC rooms. The cockpits will require biometric access control systems to verify the pilot isn't a robot. A datalink using QKD or QE to verify to some secure control room that the pilot isn't a robot might only reach about 150 km before the photons degrade. This suggests frigate-class CAC ships; it also suggests nested air defenses, not one gigantic F-35 purchase. For example, the more secure airbases inland could use longer-range jets to bomb the perimeter airbases that are vulnerable to robots and hackers, assuming no air refueling.
In addition, pandemics should be taken into account. Medical equipment such as fMRIs, or something as simple as an RFID tag, will verify that a pilot is a sane human. This is a start. This will require aircraft capable of being upgraded in the cockpit, and capable of using a datalink that transmits and/or receives QKD or QE.
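The ~150 km figure mentioned above can be sanity-checked with a back-of-envelope loss calculation. The sketch below assumes standard telecom fiber attenuation of roughly 0.2 dB/km at 1550 nm (a typical published figure; actual loss varies by fiber and wavelength), and shows why fiber-based QKD key rates collapse somewhere past 100-200 km:

```python
# Back-of-envelope check on the practical range of fiber-based QKD.
# Assumption: standard single-mode telecom fiber loses ~0.2 dB/km at 1550 nm.

def surviving_fraction(distance_km, loss_db_per_km=0.2):
    """Fraction of sent photons that survive a fiber run of the given length."""
    total_loss_db = loss_db_per_km * distance_km
    return 10 ** (-total_loss_db / 10)

for d in (50, 100, 150, 200):
    print(f"{d:>3} km: {surviving_fraction(d):.6f}")
```

At 150 km the loss is 30 dB, so only about one photon in a thousand arrives; since QKD keys are carried by single photons that cannot be amplified, the usable key rate falls off exponentially with distance, which is the physics behind the commenter's range estimate.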

the Keystone Garter said...

...I'm wondering if my time might best be spent writing video games for VIPs. The late-'80s game about the dystopian hazards of Reaganism didn't sell well.
If a robot or hacker takes control of a military vehicle, the DEFCON rating would change, and at least sane-human biometrics would be required to operate a fighter jet or remotely pilot a drone. Presumably some facilities can be hardened, in a TEMPEST manner except hardened against AI attacks (as opposed to Cold War intelligence-agency antagonists). Things like security guards at airbases are a better investment than the F-35, and the A-10 has a shorter supply chain in a pandemic. Sometimes these objectives conflict, but if I'm an AI, I time my attack for when humans are weakest. This is where the market-force proponents in Canada and the USA are out to lunch. Eventually, mainstream industries become WMDs without channeling of investment and legality (a carbon-style shift for the robotics industry at some point). Mind's Eye sensor networks can be timed to be implemented forcefully where technology seems about to trigger AI or hacker attacks able to "win" WWIII.
The enemy might be a large Asian country or Russia. It might be regional Islamic terrorists. It might be pro-NATO government agencies becoming a tyranny threat, or it might be industries of the future. Only the first two enemies are considered in purchasing F-35s. There is nothing to stop a robot from climbing into the cockpit of an F-35. I believe we should go cheap. I like the survivability and cost of the A-10s; they could survive small-arms fire from a robot on the ground. I believe we should put money into designing sane-human biometric pilot verification, and design quantum communications systems that can verify this locally. Photons lose coherence after 150 km or so in a fibre-optic cable. If robots from one facility take control of a poison gas plant or an airbase, a pilot 1000 km away might want to bomb them but couldn't verify to the Dept of Defense that he is human, as the communications link would not be secure. Things like the radioactivity of the human body an AI could not easily replicate, not without a bioreactor and advanced biotechnology.
Mind's Eye, looking for R+D of certain types, can stop all of this before it starts, if a responsible actor to administer the surveillance can be found. All of this can be equated with variables used in mainstream economics and scenario planning. But I'd rather cure diseases and build a robotic wife than write video games....

the Keystone Garter said...

...at a bare minimum, major militaries should assume a pandemic has rendered humans presently incapable of resistance. They should estimate how long it would take AI/robots to take over enough military and/or industrial capital, or make it from scratch, that they could win WWIII or at least destroy us. It is hard to estimate how easily robots could attain access control over military infrastructure, and how easily they could do R+D....but as it becomes easier for robots to win (right now it is impossible, but the Fukushima robotic contest or Japanese nurse robots might provide "robot-zero" very shortly), we should increase the efficacy of our arms control and the responsibility of arms control actors. These baskets of human capital and technologies we want and don't want should form the basis of central banker actions as well as political and SRI corporate funding decisions.