“I have always found that plans are useless, but planning is indispensable.”
– Dwight D. “Ike” Eisenhower
One of my favorite quotes from a great man. I often use it with my team to emphasize that the point of our work is not to write reports but to understand a subject.
For example, writing a project plan yourself is a good way to understand the project and its risks. How might generative artificial intelligence (AI), specifically large language models (LLMs) like ChatGPT, affect this understanding?
LLMs are good at writing. I asked ChatGPT about Ike’s quote, and it responded: “Eisenhower’s quote underscores that the act of planning enhances critical thinking. It involves analyzing potential scenarios, evaluating options, and preparing for uncertainties, which sharpens problem-solving skills and strategic thinking, making individuals better equipped to handle unexpected situations.” That was pretty good, and it was written faster than I could have written it.
However, LLMs are not always accurate. They are designed to sound good. My go-to question for large language models is from Monty Python: “What is the average airspeed velocity of an unladen swallow?” ChatGPT used to respond that this information was not in its database. Now it responds, “The average airspeed velocity of an unladen European swallow is approximately 11 meters per second or about 24 miles per hour. This is based on observations and studies of swallow flight patterns.”
When asked to respond like Monty Python member Graham Chapman, ChatGPT replies, “Oh, for heaven’s sake! The average airspeed velocity of an unladen swallow is about 24 miles per hour. But really, what matters is whether it’s a European or African swallow because we must maintain some semblance of scientific rigor, even amidst this absurdity!” It’s pretty close to the actual exchange from Monty Python and the Holy Grail!
This is a silly example, but it illustrates how quickly LLMs are improving. There are legal questions about using copyrighted and protected information, but I expect these will be resolved. We will have highly accurate chatbots soon.
A future LLM chatbot could generate a highly accurate project plan from basic inputs like a contract, scope of work, and information from previous projects. I asked ChatGPT to list the top three risks of an LLM chatbot project. Its response:
- Bias and Ethics: Training data bias can lead to inappropriate responses and ethical issues.
- Privacy and Security: Risks include handling sensitive data and potential security breaches.
- Scalability and Maintenance: Ensuring performance under heavy load and the need for regular updates.
Not bad. I could argue some details (it left out GPU cost, although that could be implied from scalability), but a future chatbot, particularly one with more input data, could do much better.
So, what’s the problem?
As mentioned, writing is a good method for understanding a subject. The writer must structure their thoughts to communicate with another person who may not be familiar with the topic. The writer questions assumptions and conclusions and considers how they might be misunderstood. They research, outline, write multiple drafts, talk to others, resolve comments, rewrite, and rewrite some more. But at the end of the process, the writer KNOWS the subject.
This process is one reason lawyers write briefs and summaries. They don’t expect anyone outside a particular legal proceeding to read them, but writing them helps lawyers prepare for questions from the judge or opposing counsel.
LLMs short-circuit this process by producing a “perfect” project plan. All you must do is follow it. It’s like an actor reading a script. But would you want an actor who plays a doctor to operate on you, or an actual doctor? A person with an LLM-generated plan might not be able to respond to an unpredicted situation. They have not invested the time to understand the subject. Would they even recognize potential risks?
There is a middle ground: Use AI for things it is good at and people for things they are good at.
For example, LLMs are good at finding information. Instead of asking the chatbot for a project plan, the project manager could ask, “What are the typical risks for this type of project?” and use that list to evaluate risk management.
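As a concrete sketch of what that query could look like in practice, the short Python example below sends the risk question to a chat model. It assumes the OpenAI Python client (pip install openai) and an OPENAI_API_KEY environment variable; the model name, prompt, and project type are illustrative assumptions, not part of the original example.

```python
# A minimal sketch: ask an LLM for typical risks to inform, not replace,
# the project manager's own analysis. Assumes the OpenAI Python client
# and an OPENAI_API_KEY environment variable; the model name and prompt
# are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",  # hypothetical model choice
    messages=[
        {"role": "system",
         "content": "You are an experienced project risk analyst."},
        {"role": "user",
         "content": "What are the typical risks for a commercial "
                    "construction project? List them briefly."},
    ],
)

# The reply is raw material for the manager's risk register, not the
# finished risk assessment.
print(response.choices[0].message.content)
```

The shape of the question is the point: the model supplies candidate risks, and the manager still does the evaluation.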
Another question could be, “What other similar projects have been done, and where are they?” There is no substitute for experience, but specific experience can be hard to find. LLMs may improve our ability to find specific projects from the past so we can ask the people who worked on them about their experience. We should leverage that.
LLMs are also good writers, producing clean grammar, correct spelling, and clear structure. Unfortunately, not everyone can write this well. An LLM could generate a first draft that a person then edits, reviews, checks, and potentially rewrites before use. Alternatively, an LLM could take a long, poorly written document and edit it for clarity and length.
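Under the same assumptions about the OpenAI client, a small helper could run that editing pass; the author then reviews the result line by line rather than accepting it blindly. The helper name and instructions are hypothetical.

```python
# A sketch of an LLM editing pass over a human-written draft. Assumes the
# OpenAI Python client and an OPENAI_API_KEY environment variable; the
# helper name and system instructions are hypothetical.
from openai import OpenAI

client = OpenAI()

def edit_for_clarity(draft: str) -> str:
    """Return the model's suggested revision; a human still reviews it."""
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model choice
        messages=[
            {"role": "system",
             "content": "Edit the user's text for clarity, grammar, and "
                        "length. Preserve the author's meaning and voice."},
            {"role": "user", "content": draft},
        ],
    )
    return response.choices[0].message.content

# The output is a revised draft for review, not a final document.
print(edit_for_clarity("Their is alot of risks we should of considered."))
```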
Would Ike have used AI? Absolutely, provided it produced better results on the battlefield. AI could have been a significant contributor if his staff had used it to inform their planning rather than to generate plans blindly. We should do the same, using AI as an aid to, not a substitute for, critical thinking and understanding.
(NOTE: In a sense, Ike did use AI. The codebreaking techniques used to crack the German Enigma codes are the basis for modern computing and for numerical methods now used in many AI systems.)