Judging from how the DoD currently buys software, lots of money will be spent, many headlines will be written, awards will be handed out, and zero software will make it on to user workstations. End users will continue to use Excel for everything.
tonyhart7 6 hours ago [-]
200 mil is chump change for them. If the prototype turns out to be good, then good for them, but if it doesn't, they won't be worried.
TZubiri 4 hours ago [-]
Not all software is made public and used on workstations, especially not in the military.
If the physical disconnect between killing a person (e.g. UAVs) wasn't enough to make that task easier, then further offloading the decision of who to target might help.
golergka 2 minutes ago [-]
Physical connection means that the person who is making the decision to kill is scared for their life. Physical disconnect means he's only scared for a piece of equipment.
Guess which one of those is more trigger happy.
JumpCrisscross 38 minutes ago [-]
> If the physical disconnect between killing a person (e.g. UAVs) wasn't enough to make that task easier then further offloading the decision of who to target might help
The physical disconnect hypothesis isn't really borne out by the lack of concern for collateral damage in pre-firearm warfare, when killing was mostly done face to face, compared to today.
Waterluvian 4 hours ago [-]
“Let’s take another whack at real-time object identification built into night vision goggles.”
(Made-up but plausible example)
tough 3 hours ago [-]
Just giving the whole DoD a ChatGPT that's deployed on their own servers would be pretty useful for them, I guess?
submeta 4 minutes ago [-]
So DoD will use OpenAI to write tweets bashing "the enemies of the empire"? They realise that Tucker Carlson and the likes are turning against forever wars, so they must deploy other tactics.
First Palantir used against US citizens. Now this.
beezlebroxxxxxx 43 minutes ago [-]
Despite what every AI exec will say publicly, I'm pretty sure they're salivating at the prospect of war/defense related applications of AI. There's just too much money floating around in the military industrial complex for them to ignore. This is doubly so if the "business" part of your AI company is about as solid as a fart in the wind.
bryanrasmussen 15 minutes ago [-]
Stop shooting at me, damn it, I'm Sam Altman!
Of course, that was an error on my part. I should only be shooting at other people and actually not in the part of the city at all, it's definitely a mistake on my part and I will rectify immediately. Thank you again for pointing it out to me!
You're still shooting at me!
GoatInGrey 44 minutes ago [-]
$200M is very small when it comes to the world of US defense. Combined with this being formally labeled as a pilot, this can be safely ignored until they reach IOC.
Though what this signals is a change in strategic direction regarding autonomous capability. While they won't be rigging an LLM onto a drone, there are many cyber and administrative problem spaces that exist in defense that AI products could meaningfully address.
Aeolun 24 minutes ago [-]
> While they won't be rigging an LLM onto a drone
You say that very confidently, but I’m extremely skeptical of that being an actual limit.
optimalsolver 6 minutes ago [-]
>IOC
Immediate or cancel?
darqis 8 minutes ago [-]
Yes, teach the machines how to kill life, whatever could go wrong...
pyuser583 6 hours ago [-]
I heard one thing AI is very good at is declassifying documents.
Avicebron 7 hours ago [-]
Let's hope before they wire it directly to the controls "because speed" they've trained it on Stanislav Petrov up, down, and backwards...
> On 26 September 1983, three weeks after the Soviet military had shot down Korean Air Lines Flight 007, Petrov was the duty officer at the command center for the Oko nuclear early-warning system when the system reported that a missile had been launched from the United States, followed by up to four more. Petrov judged the reports to be a false alarm.
upghost 8 hours ago [-]
Does anyone have any idea what the DoD could possibly want from OpenAI? Less accurate/more sycophantic missiles?
notesinthefield 7 hours ago [-]
Some of the more popular models (NIPRGPT, the various DREN models) are “soft banned” and DoD is in need of a unified solution. MSFT’s GCC HIGH and GovCloud implementations have been slow to materialize. But more to your point - everyone is using LLMs to pick up the slack from layoffs. I'm sitting in meetings and watching my gov customers generate documentation and proposals every day. Everything the commercial world uses AI for, the US gov is doing the same. Can't directly speak to targeting, but you can bet your ass there are 100 different offensive projects trying to integrate AI into ISR work and the like.
pests 3 hours ago [-]
Palantir has an older demo of their chat-like interface showcasing targeting selection, battle plans and formations, and other advice. Kind of creepy; I assume it's much more capable now.
greenavocado 2 hours ago [-]
Palantir is the poster child for a global panopticon
munificent 6 hours ago [-]
1. Secretary of Defense feels like bombing some place. Asks aide to write a report on justification, logistics, and consequences.
2. Aide tells subordinate to write report.
3. Subordinate uses ChatGPT to write the 100-page report. Sends it to aide.
4. Aide uses ChatGPT to summarize report. Sends summary to SecDef.
5. SecDef accidentally posts summary on publicly-accessible social media page, then forwards to President.
6. Bombs go boom.
ginkgotree 7 hours ago [-]
Yeah, tons. SIGINT / HUMINT analysis. After-action report summaries. War gaming to optimize deterrence. Human-machine teaming. LLM-in-the-loop for warfighters. Rapid code gen in field deployments for units to spin up software solutions. The list is endless, imho.
felixgallo 5 hours ago [-]
llm-in-the-loop for whatever a 'warfighter' is, is basically the opposite of how fighting wars should go.
kube-system 5 hours ago [-]
The DoD does plenty of things beyond putting boots on the ground. They’re the world’s largest employer. They have all the same boring problems that any employer has at gigantic scale.
ginkgotree 3 hours ago [-]
Yep, pretty much.
ginkgotree 3 hours ago [-]
Why? It could help them assess threats and civilians / avoid collateral damage. Like any weapon or technology, it depends on its use. Warfighter is the modern industry / academic term for "soldier."
ringeryless 2 hours ago [-]
"help"
(botch the job)
somenameforme 4 hours ago [-]
Automatically generated, native sounding, propaganda at scale - capable of interacting in real time. This was always the MIC money endgame for LLMs. This is also probably why they are enlisting tech execs from Meta, OpenAI, etc.
bcrosby95 2 hours ago [-]
I look forward to our senators "living" to 100+.
gilgoomesh 7 hours ago [-]
ChatGPT, do you know where the General left his keys?
impulser_ 7 hours ago [-]
You'd be surprised how much work at the DoD has nothing to do with weapons.
> “This contract, with a $200 million ceiling, will bring OpenAI’s industry-leading expertise to help the Defense Department identify and prototype how frontier AI can transform its administrative operations, from improving how service members and their families get health care, to streamlining how they look at program and acquisition data, to supporting proactive cyber defense,”
Translated - they'll hand out GPT access to a bunch of service members and administrators. Except the UI will have a big DoD logo and words like "SECURE" and "CLASSIFIED" will be displayed on it a few dozen times.
01100011 8 hours ago [-]
You realize that the DoD has a huge amount of normal business work like logistics, project management, people management, benefits management, etc? Right?
dmd 6 hours ago [-]
The United States Military (Waterhouse has decided) is first and foremost an unfathomable network of typists and file clerks, secondarily a stupendous mechanism for moving stuff from one part of the world to another, and last and least a fighting organization. —Cryptonomicon
rkagerer 7 hours ago [-]
I suspect it's more than that.
“Under this award, the performer will develop prototype frontier AI capabilities to address critical national security challenges in both warfighting and enterprise domains,” the Defense Department said.
notesinthefield 7 hours ago [-]
“National security challenges” is incredibly broad, providing the right size of boots to USCG rescue swimmers could be considered a national security challenge.
koakuma-chan 6 hours ago [-]
it says _critical_
XorNot 17 minutes ago [-]
Trenchfoot was a substantial source of casualties in WW1, and looking after your feet is a top priority for every military force in the field.
kube-system 5 hours ago [-]
Ain’t nothing more critical than rescue!
guywithahat 7 hours ago [-]
Knowing the DoD, I bet it's not. I bet they just want their own secure servers or some sort of corporate data/encryption management, and they're willing to pay out the nose to not have to use asksage or some terrible DoD friendly clone
piyushpr134 4 hours ago [-]
An on-premises deployment?
an0malous 7 hours ago [-]
I would guess it’s for mass surveillance. Even just the ability to extract names and entities from audio, video, and text on every piece of public media would be useful.
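To make "extract names and entities" concrete, here's a toy sketch of the text half of that task using a naive capitalized-run heuristic. The regex and sample sentence are purely illustrative; any real pipeline would use a trained NER model rather than anything this crude.

```python
import re

def extract_entities(text: str) -> list[str]:
    """Toy stand-in for named-entity recognition: pull out runs of
    capitalized words. Only meant to show the shape of the task;
    real systems use trained statistical models."""
    return re.findall(r"[A-Z][a-z]+(?:\s+[A-Z][a-z]+)*", text)

print(extract_entities("Yesterday John Smith met Jane Doe in New York."))
# → ['Yesterday John Smith', 'Jane Doe', 'New York']
# Note the failure mode: the sentence-initial word gets glued onto a
# name, which is exactly why real pipelines don't use regexes for this.
```

The failure in the first result is the point: even the "easy" text case needs ML to be reliable, which is presumably where a model vendor comes in.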
MOARDONGZPLZ 7 hours ago [-]
DOD doesn’t really do this
an0malous 6 hours ago [-]
Maybe they’d like to start
stonogo 7 hours ago [-]
Only because they currently contract it out to Palantir (at least the bits that NSA isn't handling)
zmgsabst 4 hours ago [-]
NSA is a DOD organization.
> The National Security Agency (NSA) is an intelligence agency of the United States Department of Defense, under the authority of the director of national intelligence (DNI).
https://en.wikipedia.org/wiki/National_Security_Agency
> William J. Hartman is a United States Army lieutenant general who has served as the acting commander of United States Cyber Command, director of the National Security Agency,
https://en.wikipedia.org/wiki/William_J._Hartman
They’re staffed by military people (alongside civilians) and their commander is always military — because much of what they do (abroad) could be construed as acts of war.
jasonfrost 4 hours ago [-]
Easy PT plans
LightBug1 7 hours ago [-]
One AI per person ...
m3kw9 7 hours ago [-]
Sycophantic missiles would be desirable
bpodgursky 3 hours ago [-]
You guys have no idea how many DoD man-hours are spent on jobs like
"add up all the item counts in the inventory report and send a weekly email"
Yes maybe OpenAI is developing killer drones or maybe (imo likely) it's licensing a FedRAMP complaint AI for normal business work.
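For a sense of scale, the inventory chore described above is the kind of thing a few lines of ordinary code already handle. The file format, column names, and sample data below are all made up for illustration:

```python
import csv
import io

# Hypothetical weekly chore: total the item counts in an inventory
# report and draft the body of the weekly email. The columns and
# sample rows are invented for this sketch.
SAMPLE_REPORT = """item,count
rations,1200
boots,340
radios,57
"""

def weekly_summary(report_csv: str) -> str:
    rows = list(csv.DictReader(io.StringIO(report_csv)))
    total = sum(int(row["count"]) for row in rows)
    return f"Weekly inventory total: {total} items across {len(rows)} line items."

print(weekly_summary(SAMPLE_REPORT))
# → Weekly inventory total: 1597 items across 3 line items.
```

That this is still done by hand in many offices is the argument for the "boring business automation" reading of the contract.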
muglug 2 hours ago [-]
You don’t need AI to complain about FedRAMP
bpodgursky 2 hours ago [-]
Technically I can still edit that post but now I think it's better this way.
d--b 3 hours ago [-]
So much for humanity’s greater good Sam.
loandbehold 3 hours ago [-]
Depending on your political views, it may be good if it helps the USA keep its military edge over China and prevents China from invading Taiwan.
vasco 1 hours ago [-]
There are invasions going on right now that aren't being prevented; no need for theoretical ones.
ringeryless 2 hours ago [-]
said capabilities Hegseth is utterly gutting and undermining.
It's more likely China's next-gen aircraft one should be wary of than their AI.
(as previewed in recent India-Pakistan air engagements)
I really see this so-called AI race as a bullet to be dodged; a bubble to be waited out.
It has been relentlessly pushed from the top, and we always find really pushy FOMO as the main driver.
I'm not impressed by non-deterministic mechanisms that undo the zero-overhead advantages hard won by decades of automation.
This is not a CAD tool amplifying and articulating human intentions, but a vague floppy jelly blob of "I wonder what will come out."
tehjoker 2 hours ago [-]
Why do you even care about Taiwan?
rvz 7 hours ago [-]
Isn't this part of the true definition of "AGI", and isn't it all for the benefit of humanity?
Or are we finally realizing that we're getting scammed again on these so-called promises and it was all a grift?
Maybe we should just wake up.
trhway 3 hours ago [-]
On the way to benefiting all humanity, MS helped Sam back then, and now MS will get to wake up to the real Sam :)
https://www.reuters.com/sustainability/boards-policy-regulat...
“OpenAI executives have considered accusing Microsoft, the company's major backer, of anticompetitive behavior in their partnership …
OpenAI's effort could involve seeking a federal regulatory review of the terms of its contract with Microsoft for potential violations of antitrust law, as well as a public campaign,…“
lyu07282 4 hours ago [-]
People are practically irrelevant infants at this point. We are about to repeat the Iraq war point by point, with universal agreement. The same people in charge are recycling the same propaganda, selling the same lies, in many cases quite literally to the same people, and it's working, so I don't know why you are expecting anyone to ever "wake up".
This, this is why I have such an issue with the amount of taxes I pay
Not because I’m anti social programs the way people like to immediately assume, but because of dumb shit like this that I have no control over
kube-system 5 hours ago [-]
Honestly, why do you think it is dumb?
I think it is pretty well established that LLMs can be a great time saver when used appropriately. Why wouldn’t you want that productivity gain at the government level?
_def 5 hours ago [-]
Reading and writing reports when people's lives are on the line is arguably a hot topic, no?
kube-system 5 hours ago [-]
One would imagine that a $200M contract would come with at least some minimal amount of guidance on best practices. The DoD is not a spring chicken when it comes to automation. They've been a perennial early adopter.
ringeryless 2 hours ago [-]
And LLMs are the opposite of automation, the opposite of a human-intention amplifier like CAD/CAM, or Chef/Puppet/Ansible/Terraform, whatever; aka non-deterministic.
more_corn 7 hours ago [-]
This gives me a sick feeling of unease.
bluealienpie 4 hours ago [-]
That's the rational response.
eastbound 2 hours ago [-]
OpenAI was supposed to be open; after making it a private company, it will become governmental & defense.
Good luck to Elon Musk for his trial for the open-source-ness of the organization.
layoric 7 hours ago [-]
That should shore up their financials given their.. checks notes $12B in operational costs. /s
Hope it's worth it.
throw234234234 2 hours ago [-]
My view is that it isn't really entirely about economics anymore at least on a traditional cost/benefit analysis basis. It is seen as a way to disrupt industries. Think of it more like war with arms race dynamics (winner takes all), or consolidation of power to capital over labor. Even if it is a net negative you need to play to stay in the game even if it disrupts your own revenue (e.g. Google) else lose entirely.
I suspect the capital class would throw good money after bad to make AI viable especially since a lot of the costs are fixed in nature (i.e. in training runs, not per query).
bix6 6 hours ago [-]
$10B run rate now so they can just plug the gap with $2B in ads!?! Hot DoD singles near you! Would you like me to generate an image of their stealth package ;) ?
dluan 7 hours ago [-]
Directly hooking up the AI to the nuclear button is which chapter of the don't-build-the-torment-nexus book?
fabfoe 6 hours ago [-]
Isn’t that the Department of Energy that does that, not DoD?
kevingadd 15 minutes ago [-]
DoD would be involved in actual deployment of nukes, I would expect.
add-sub-mul-div 7 hours ago [-]
The epilogue.
mckirk 7 hours ago [-]
The last published draft of the epilogue.
lovich 7 hours ago [-]
[flagged]
okdood64 6 hours ago [-]
[flagged]
blooalien 6 hours ago [-]
> No one would do that.
Y'know though, there's quite a lot of really stupid things being done by humanity's so-called "leaders" right now (industry and gov't both) that saner folk thought no one would ever do. Sadly, sanity is not the norm these days among those thinking they're "large and in charge"...
ruined 6 hours ago [-]
an llm can never be made to suffer
therefore an llm must never exercise strike authority
https://en.wikipedia.org/wiki/Dr._Strangelove
The writing and acting are superb, and the same goes for the sets and camera work. Come to think of it, the only thing I dislike (and greatly so) is the trailer, as to me it profoundly fails to communicate the atmosphere of the movie.
GartzenDeHaes 5 hours ago [-]
Gen. Ripper is getting some validation now that fluoride is being banned in some places.