The question often comes up as a hypothetical. Suppose we added small computers to monkeys' brains so that we could communicate with them and turn them into automated soldiers to fight in wars. When and how would we make the call that they could be trusted to make correct decisions about when to use their weapons in the real world? Ultimately, one has to compare them to humans in terms of reliability. Humans fail that test too, so as long as the failure rate is reasonable, and not much higher than the human one, objections based on failure seem to be applying a double standard.

The other thing is that we'll never really know what they're ultimately going to do until we release them into the real world; simulations can't predict it. We can manage the risk through incremental implementation, starting small and scaling up based on results, but there is no substitute for real-world tests. Given the promise of the technology, an occasional death seems a small price to pay, IMO.