Showing posts with label Prompt Engineering. Show all posts

Saturday, September 20, 2025

Can AI Be Your Paralegal? (Only if You Follow This 5-Step Verification Process)

A legal professional works on a laptop, symbolizing the intersection of law and AI technology.

 

Blogging_CS · 10 min read

Generative AI promises to revolutionize the speed of legal research, but a critical pitfall lies hidden beneath the surface: “AI hallucinations.” Because AI can fabricate non-existent case law that looks authentic, legal professionals are now facing the paradox of spending more time verifying AI outputs than it would have taken to draft the work themselves.

This isn’t a hypothetical concern. In Mata v. Avianca, a case in the Southern District of New York, attorneys faced sanctions for submitting a brief containing fake judicial opinions generated by AI. Even more striking is Noland v. Land, where the California Court of Appeal sanctioned an attorney for filing a brief in which 21 of 23 case citations were complete fabrications. The penalty was severe: a $10,000 fine, mandatory notification to the client, and a report to the state bar.

These rulings send a clear message: before any discussion of technology, the user’s attitude and responsibility are paramount. Attorneys (including patent attorneys) have a fundamental, non-delegable duty to read and verify every citation in documents submitted to the court, regardless of the source. With the risk of AI hallucinations now widely known, claiming ignorance—“I didn’t know the AI could make things up”—is no longer a viable excuse. Ultimately, the final line of defense is a mindset of professional skepticism: question every AI output and cross-reference every legal basis with its original source.


A 5-Step Practical Workflow for Risk Management

Apply the following five-step workflow to all AI-assisted tasks to systematically manage risk.

  1. Step 1: Define the Task & Select Trusted Data

    Set a clear objective for the AI and personally select the most reliable source materials (e.g., recent case law, statutes, internal documents). Remember that the “Garbage In, Garbage Out” principle applies from the very beginning.

  2. Step 2: Draft with RAG (Retrieval-Augmented Generation)

    Generate the initial draft based on your selected materials. RAG is the most effective anti-hallucination technique, as it forces the AI to base its answers on a trusted external data source you provide, rather than its vast, internal training data.

    Use Case:

    • Drafting an Initial Case Memo: Upload relevant case law, articles, and factual documents to a tool like Google's NotebookLM or Claude. Then, instruct it: “Using only the uploaded documents, summarize the court's criteria for ‘Issue A’ and outline the arguments favorable to our case.” This allows for the rapid creation of a reliable initial memo.
  3. Step 3: Expand Research with Citation-Enabled Tools

    To strengthen or challenge the initial draft's logic, use AI tools that provide source links to broaden your perspective.

    Recommended Tools:

    • Perplexity, Skywork AI: Useful for initial research as they provide source links alongside answers.
    • Gemini's Deep Research feature: Capable of comprehensive analysis on complex legal issues with citations.

    Pitfall:

    • Source Unreliability: The AI may link to personal blogs or irrelevant content. An AI-provided citation is not a verified fact; it must be checked manually.
  4. Step 4: Cross-Verify with Multiple AIs & Refine with Advanced Prompts

    Critically review the output by posing the same question to two or more AIs (e.g., ChatGPT, Gemini, Claude) and enhance the quality of the results through sophisticated prompt engineering.

    Key Prompting Techniques:

    • Assign a Role: “You are a U.S. patent attorney with 15 years of experience specializing in the semiconductor field.”
    • Demand Chain-of-Thought Reasoning: “Think step-by-step to reach your conclusion.”
    • Instruct it to Admit Ignorance: “If you do not know the answer, state that you could not find the information rather than guessing.”
  5. Step 5: Final Human Verification - The Most Critical Step

    You must personally check every sentence, every citation, and every legal argument generated by the AI against its original source. To skip this step is to abdicate your professional duty.
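The grounding idea in Step 2 can be sketched in a few lines of Python. This is a minimal illustration, not a production RAG pipeline: real systems use embedding-based retrieval, and the `retrieve` and `build_grounded_prompt` names, along with the naive keyword-overlap scoring, are my own assumptions for the sketch.

```python
# Minimal sketch of RAG grounding (Step 2), assuming you have already
# extracted plain text from your trusted documents. The retrieval below is
# a naive keyword-overlap ranking; real systems use embeddings instead.

def retrieve(query: str, documents: list[str], top_k: int = 2) -> list[str]:
    """Rank documents by how many query words they contain."""
    query_words = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda doc: len(query_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_grounded_prompt(query: str, documents: list[str]) -> str:
    """Force the model to answer only from the retrieved passages."""
    passages = retrieve(query, documents)
    context = "\n---\n".join(passages)
    return (
        "Using ONLY the passages below, answer the question. "
        "If the passages do not contain the answer, say so.\n\n"
        f"Passages:\n{context}\n\nQuestion: {query}"
    )

docs = [
    "The court held that claim construction begins with the claim language.",
    "GPU memory usage dropped by 30% after quantization.",
]
prompt = build_grounded_prompt(
    "What did the court hold about claim construction?", docs
)
```

The key design point is in the instruction string: the model is told both to restrict itself to the supplied passages and to admit when they do not contain the answer, mirroring Steps 2 and 4 of the workflow.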


Advanced Strategies & Firm-Level Policy

Beyond the daily workflow, firms should establish a policy framework to ensure stability and trust in their use of AI.

  • Establish a Multi-Layered Defense Framework: Consider a formal defense-in-depth approach: (Base Layer) Sophisticated prompts → (Structural Layer) RAG for grounding → (Behavioral Layer) Fine-tuning for specialization. Fine-tuning, using tools like ChatGPT's GPTs or Gemini for Enterprise, can train an AI on your firm's past work to enhance accuracy for specific tasks, but requires careful consideration of cost, overfitting, and confidentiality risks.
  • Implement a Confidence-Based Escalation System: Design an internal system that scores the AI's confidence in its responses. If a score falls below a set threshold (e.g., 85%), the output could be automatically flagged for mandatory human review, creating a secondary safety net.
  • Establish Principles for Billing and Client Notification: AI subscription fees should be treated as overhead, not directly billed to clients. Bill for the professional value created by using AI (e.g., deeper analysis, better strategy), not for the “machine’s time.” Include a general disclosure clause in engagement letters stating that the firm may use secure AI tools to improve efficiency, thereby ensuring transparency with clients.
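The escalation rule described above is easy to prototype. In this hedged sketch, the confidence score is assumed to be supplied by your pipeline (for example, a judge model or a log-probability heuristic); the `triage` function, the 0.85 threshold, and the sample citations are illustrative only, and the second citation is deliberately fabricated to show what the gate catches.

```python
# Minimal sketch of a confidence-based escalation gate. The confidence
# scores are assumed to come from your AI pipeline; all names are illustrative.

REVIEW_THRESHOLD = 0.85  # outputs scoring below this require human review

def triage(answer: str, confidence: float) -> dict:
    """Attach a mandatory-review flag to low-confidence outputs."""
    return {
        "answer": answer,
        "confidence": confidence,
        "needs_human_review": confidence < REVIEW_THRESHOLD,
    }

results = [
    triage("Cited case: Mata v. Avianca (S.D.N.Y. 2023)", confidence=0.92),
    # A deliberately fabricated citation, to show what gets flagged:
    triage("Cited case: Smith v. Nobody, 999 F.9th 1 (2099)", confidence=0.41),
]
flagged = [r for r in results if r["needs_human_review"]]
```

Note that even the high-confidence output still goes through Step 5's human verification; the gate is a secondary safety net, not a substitute for it.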

Conclusion: Final Accountability and the Path Forward

The core of the AI hallucination problem ultimately lies in the professional’s verification mindset. The technologies and workflows discussed today are merely tools. As courts and bar associations have repeatedly warned, the final responsibility rests with the human professional.

“AI is a tool; accountability remains human.”

Only by establishing this principle and combining multi-layered verification strategies with a commitment to direct validation can we use AI safely and effectively. When we invest the time saved by AI into deeper legal analysis and more creative strategy, we evolve into true legal experts of the AI era. AI will not replace you, but the responsibility for documents bearing your name rests solely with you.

Frequently Asked Questions

Q: Can I trust the content if the AI provides a source link?
A: Absolutely not. A source link provided by an AI is merely a claim of where it got the information, not a guarantee of accuracy. The AI can misinterpret or distort the source's content. You must click the link, read the original text, and verify that it has been cited correctly and in context.
Q: What is the safest way to use AI with confidential client information?
A: The default should be to use an enterprise-grade, secure AI service contracted by your firm or a private, on-premise LLM. If you must use a public AI, you are required to completely anonymize all identifying information from your queries. Uploading sensitive data to a public AI service is a serious ethical and security violation.
Q: What is the most common mistake legal professionals make when using AI?
A: Skipping Step 5 of the workflow: “Final Human Verification.” Seeing a well-written, plausible-sounding sentence and copy-pasting it without checking the original source is the easiest way to fall into the hallucination trap, with potentially severe consequences.

Saturday, September 6, 2025

Patentability of LLM Prompts: Overcoming Abstract Idea Rejections

 

Can a Simple Command to an AI Be Patented? This article provides an in-depth analysis of how LLM prompt techniques can transcend mere ‘ideas’ to be recognized as concrete ‘technical inventions,’ exploring key strategies and legal standards across different countries.

It seems that almost no one around us thinks of protecting prompts, or the prompt techniques used to instruct LLMs, with patents. At first, I was also skeptical, wondering, ‘Can a simple command to a computer be patented?’ However, as I delved deeper into this topic, I came to the conclusion that it is entirely possible if certain conditions are met. This article summarizes the thought process I went through; please bear in mind that it may not yet be an academically established view. 😊

 

🤔 Prompts Aren’t Patentable Because They’re Just ‘Human Thoughts,’ Right?

The first hurdle that comes to mind for many is the principle that ‘human mental processes’ are not patentable subject matter. In fact, the argument that “a prompt is fundamentally human involvement, and technology involving such human mental activity is not patentable” is one of the strongest reasons for rejection in patent examination. This standard has been particularly firm since the U.S. Supreme Court’s Alice Corp. v. CLS Bank decision. It means that merely implementing something on a computer that a person could do in their head is not enough to get a patent.

According to this logic, the act of instructing an AI through a prompt is ultimately an expression of human thought, so one could easily conclude that it cannot be patented. However, this argument is half right and half wrong. And this is precisely where our patent strategy begins.

💡 Good to Know!
What patent law takes issue with as ‘human intervention’ is not the act of giving a command to a system itself. It refers to cases where the core idea of the invention remains at the level of a mental step that can be practically performed in the human mind. Therefore, the key is to prove that our prompt technology transcends this boundary.

 

📊 A Shift in Perspective: From ‘Command’ to ‘Computer Control Technology’

The first step to unlocking the patentability of prompt technology is to change our perspective. We need to redefine our technology not as ‘a message sent from a human to an AI,’ but as ’a technology that controls the internal computational processes of a complex computer system (LLM) through structured data to solve a technical problem and achieve concrete performance improvements.’

If you take a close look at the algorithm behind China’s DeepSeek-R1, you can see that it directly implements several well-known prompt techniques.

Think about it. The process of assigning a specific expert role to an LLM with billions of parameters, injecting complex library dependency information as context, and combining numerous constraints to control the generation of optimal code is clearly in a realm that ‘cannot practically be performed in the human mind.’ This is a crucial standard for recognizing patent eligibility in the guidelines and case law of the U.S. Patent and Trademark Office (USPTO).

 

🌍 A Comparative Look at Key Examination Standards of Major Patent Offices

The patentability of prompt technology is not assessed uniformly across all countries. If you are considering international filing, it is crucial to understand the subtle differences in perspective among major patent offices.

1. USPTO (United States Patent and Trademark Office) – Emphasis on the Abstract Idea Exception

The USPTO strictly applies the Alice/Mayo two-step test, which originated from Supreme Court case law. Instructions or general linguistic expressions that merely replace human thought processes can be dismissed as “abstract ideas.” However, if it can be demonstrated that the prompt is linked to a concrete technical implementation (e.g., improving model accuracy, optimizing specific hardware operations), there is a chance of it being recognized as patent-eligible subject matter.

2. EPO (European Patent Office) – Focus on Technical Effect

The EPO assesses based on “technical character” and “technical effect.” Simply presenting data input or linguistic rules is considered to lack inventive step, but if the prompt structure serves as a means to solve a technical problem (e.g., improving computational efficiency, optimizing memory usage, enhancing interaction with a specific device), it can be recognized as patent-eligible.

3. KIPO (Korean Intellectual Property Office) – Emphasis on Substantive Requirements for Software Inventions

KIPO places importance on the traditional requirement of “a creation of a technical idea utilizing the laws of nature.” Therefore, a prompt as a mere sentence or linguistic rule is not considered a technical idea, but if it is shown to be combined with a specific algorithm, hardware, or system to produce a concrete technical result, it can be recognized as an invention. In Korean practice, presenting a concrete system structure or processing flow is particularly persuasive.

Key Comparison Summary

  • USPTO (U.S.): Emphasis on ‘concrete technical implementation’ to avoid the abstract idea exception
  • EPO (Europe): Proof of ‘technical effect’ is key; simple data manipulation is insufficient
  • KIPO (Korea): Must be a technical idea using the laws of nature, with emphasis on systemic/structural implementation
⚠️ Implications for International Filing
The same “LLM prompt” technology could be at risk of being dismissed as an “abstract business method” in the United States, a “non-technical linguistic rule” in Europe, and a “mere idea” in Korea. Therefore, when considering international filing, a strategy that clearly articulates the ‘concrete system architecture’ and ‘measurable technical effects’ throughout the specification is essential as a common denominator.

 

🧮 A Practical Guide to Drafting Patent Claims (Detailed)

So, how should you draft patent claims to avoid the ‘human intervention’ attack and clearly establish that it is a ‘technical invention’? Let’s take a closer look at four key strategies.

1. Set the subject as the ‘computer (processor),’ not the ‘person.’

This is the most crucial step in shifting the focus of the invention from the ‘user’s mental activity’ to the ‘machine’s technical operation.’ It must be specified that all steps of the claim are performed by computer hardware (processor, memory, etc.).

  • Bad 👎: A method where a user specifies a persona to an LLM and generates code.
  • Good 👍: A step where a processor, upon receiving a user’s input, assigns a professional persona for a specific programming language to the LLM.

2. Specify the prompt as ‘structured data.’

Instead of abstract expressions like ‘natural language prompt,’ you need to clarify that it is a concrete data structure processed by the computer. This shows that the invention is not just a simple idea.

  • Bad 👎: A step of providing a natural language prompt to the LLM.
  • Good 👍: A step of generating and providing to the LLM a machine-readable context schema that includes library names and version constraints.

3. Claim ‘system performance improvement,’ not the result.

Instead of subjective results like ‘good code,’ you must specify objective and measurable effects that substantially improve the computer’s functionality. This is the core of ‘technical effect.’

  • Bad 👎: A step of generating optimized code.
  • Good 👍: A step of controlling the LLM’s token generation probability through the schema to generate optimized code that reduces code compatibility errors and saves GPU memory usage.

4. Clarify the ‘automation’ process.

It should be specified that all processes after the initial input (data structuring, LLM control, result generation, etc.) are performed automatically by the system without further human judgment, demonstrating that it is a reproducible technical process.

 

📜 Reinforced Claim Example

By integrating all the strategies described above, you can construct a reinforced patent claim as follows.

[Claim] A computer-implemented method for generating optimized code, comprising:

  • (a) parsing, by a processor, a user’s natural language input to generate a persona identifier defining an expert role for a specific programming language;
  • (b) generating, by the processor, by referencing said input and an external code repository, structured context data including library names, version constraints, and hardware memory usage limits;
  • (c) generating, by the processor, a control prompt including said persona identifier and structured context data and transmitting it to an LLM, thereby automatically controlling the internal token generation process of the LLM;
  • (d) receiving, from said controlled LLM, optimized code that satisfies said constraints and has a compilation error rate below a predefined threshold and reduced GPU memory usage.

→ This example, instead of focusing on a simple result, greatly increases the chances of patent registration by clarifying system-level measurable technical effects such as ‘reduced compilation error rate’ and ‘reduced GPU memory usage.’

 

Frequently Asked Questions ❓

Q: Can a simple prompt like "write a poem about a cat" be patented?
A: No, that in itself is just an idea and would be difficult to patent. The subject of a patent would be a technical method or system that uses a prompt with a specific data structure (e.g., a schema defining poetic devices, rhyme schemes) to control an LLM to generate a poem, resulting in less computational resource usage or more accurate generation of a specific style of poetry.
Q: What are some specific ‘technical effects’ of prompt technology?
A: Typical examples include reduced compilation error rates in code generation, savings in computational resources like GPU and memory, shorter response generation times, and improved output accuracy for specific data formats (JSON, XML, etc.). The important thing is that these effects must be measurable and reproducible.
Q: Do I need to draft claims differently for each country when filing internationally?
A: Yes, while the core strategy is the same, it is advantageous to tailor the emphasis to the points that each patent office values. For example, in a U.S. (USPTO) specification, you would emphasize the ‘concrete improvement of computer functionality,’ in Europe (EPO), the ‘technical effect through solving a technical problem,’ and in Korea (KIPO), the ‘concreteness of the system configuration and processing flow.’

In conclusion, there is a clear path to protecting AI prompts with patents. However, it requires a strategic approach that goes beyond the idea of ‘what to ask’ and clearly demonstrates ‘how to technically control and improve a computer system.’ I hope this article provides a small clue to turning your innovative ideas into powerful intellectual property. If you have any more questions, feel free to ask in the comments! 😊

※ This blog post is intended for general informational purposes only and does not constitute legal advice on any specific matter. For individual legal issues, please consult a qualified professional.

Wednesday, September 3, 2025

LLM-Powered Patent Search from A to Z: From Basic Prompts to Advanced Strategy

 

Still Stumped by Patent Searches with LLMs? This post breaks down how to use the latest AI Large Language Models (LLMs) to maximize the accuracy and efficiency of your patent searches, including specific model selection methods and advanced ‘deep research’ prompting techniques.

Hi there! Have you ever spent days, or even weeks, lost in a sea of patent documents, trying to find that one piece of information you need? I’ve definitely been there. The anxiety of wondering, ‘Is my idea truly novel?’ can keep you up at night. But thanks to the latest Large Language Models (LLMs), the whole paradigm of patent searching is changing. It’s even possible for an AI to conduct its own ‘deep research’ by diving into multiple sources. Today, I’m going to share some practical examples of ‘prompt engineering’ that I’ve learned firsthand to help you unlock 200% of your LLM’s potential!

Prompt Engineering Tricks to Boost Accuracy by 200%

Choosing the right AI model is important, but the success of your patent search ultimately depends on how you ask your questions. That’s where ‘prompt engineering’ comes in. It’s the key to making the AI accurately grasp your intent and deliver the best possible results. Let’s dive into some real-world examples.

Heads Up!
LLMs are not perfect. They can sometimes confidently present false information, a phenomenon known as ‘hallucination.’ It’s crucial to get into the habit of cross-referencing any patent numbers or critical details the AI provides with an official database.

 

1. Using Chain-of-Thought for Step-by-Step Reasoning

When you have a complex analysis task, asking the AI to ‘show its work’ by thinking step-by-step can reduce logical errors and improve accuracy.

Prompt Example:
Analyze the validity of a patent for an ‘autonomous driving technology that fuses camera and LiDAR sensor data’ by following these steps.

Step 1: Define the core technical components (camera, LiDAR, data fusion).
Step 2: Based on the defined components, generate 5 sets of search keywords for the USPTO database.
Step 3: From the search results, select the 3 most similar prior art patents.
Step 4: Compare the key claims of the selected patents with our technology, and provide your final opinion on the patentability of our tech.
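Prompts like the one above are easier to keep consistent across searches if you assemble them programmatically. A small sketch (the helper name and wording are my own, not a standard API):

```python
# Assemble a step-by-step (Chain-of-Thought) prompt from its parts, so the
# same scaffold can be reused across different patent analyses.

def build_cot_prompt(task: str, steps: list[str]) -> str:
    numbered = "\n".join(f"Step {i}: {s}" for i, s in enumerate(steps, start=1))
    return f"{task} by following these steps.\n\n{numbered}"

prompt = build_cot_prompt(
    "Analyze the validity of a patent for an 'autonomous driving technology "
    "that fuses camera and LiDAR sensor data'",
    [
        "Define the core technical components (camera, LiDAR, data fusion).",
        "Generate 5 sets of search keywords for the USPTO database.",
        "Select the 3 most similar prior art patents from the search results.",
        "Compare key claims with our technology and give a patentability opinion.",
    ],
)
```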

 

2. Using Real-Time External Information (RAG & ReAct)

LLMs only know information up to their last training date. To get the latest patent data, you need to instruct them to search external databases in real-time.

Prompt Example:
You are a patent analyst. Using your search tool, find all patent publications on KIPRIS related to ‘Quantum Dot Displays’ published since January 1, 2024.

1. Organize the list of patents by application number, title of invention, and applicant.
2. Summarize the overall technology trends and analyze the core technical focus of the top 3 applicants.
3. Based on your analysis, predict which technologies in this field are likely to be promising over the next two years.

 

3. Activating the “Deep Research” Function

The latest LLMs can do more than just a single search. They have ‘deep research’ capabilities that can synthesize information from multiple websites, academic papers, and technical documents to create a comprehensive report, much like a human researcher.

Prompt Example:
Activate your deep research function. Write an in-depth report on the global R&D trends for ‘next-generation semiconductor materials using Graphene.’ The report must include the following:

1. The main challenges of the current technology and the latest research trends aimed at solving them (reference and summarize at least 3 reputable academic papers or tech articles).
2. An analysis of the top 5 companies and research institutions leading this field and their key patent portfolios.
3. The expected technology development roadmap and market outlook for the next 5 years.
4. Clearly cite the source (URL) for all information referenced in the report.

 

4. Exploring Multiple Paths (Tree of Thoughts)

This is useful for solving strategic problems with no single right answer, like designing around a patent or charting a new R&D direction. You have the AI explore and evaluate multiple possible scenarios.

Prompt Example:
Propose three new design concepts for a ‘secondary battery electrode structure’ that do not infringe on claim 1 of U.S. Patent ‘US 1234567 B2’.

1. For each design, clearly explain which elements of the original patent were changed and how.
2. Evaluate the technical advantages, expected performance, and potential drawbacks of each design.
3. Select the design you believe has the highest likelihood of avoiding infringement and achieving commercial success, and provide a detailed argument for your choice.

💡 Pro Tip!
The common thread in all great prompts is that they give the AI a clear ‘role,’ explain the ‘context,’ and demand a ‘specific output format.’ Just remembering these three things will dramatically improve your results.

LLM Patent Search: Key Takeaways

  • Assign a Role: Give the AI a specific expert role, like “You are a patent attorney.”
  • Step-by-Step Thinking: For complex analyses, instruct the AI to use step-by-step reasoning (CoT) to improve logical accuracy.
  • Advanced Strategies: Use Deep Research and Tree of Thoughts to generate expert-level reports.
  • Cross-Verification is a Must: Always be aware of AI hallucinations and verify important information against original sources.

Frequently Asked Questions

Q: Is the ‘deep research’ function available on all LLMs?
A: No, not yet. It’s more of an advanced feature typically found in the latest premium versions of LLMs like Perplexity, Gemini, and ChatGPT. However, you can mimic a similar effect by using the standard search function and asking questions in multiple, sequential steps.
Q: Can I trust the search results from an LLM 100%?
A: No, you absolutely cannot. An LLM is a powerful assistant, not a substitute for a qualified expert’s final judgment. Due to hallucinations, it can invent patent numbers or misrepresent content. It is essential to always verify its findings against the original documents and have them reviewed by a professional.
Q: Prompt engineering seems complicated. Where should I start?
A: An easy way to start is by modifying the examples shown today. Just applying three techniques—’assigning a role,’ ‘specifying the format,’ and ‘requesting step-by-step thinking’—will dramatically improve the quality of your results.

Patent searching is no longer the tedious, uphill battle it once was. How you wield the powerful tool of LLMs can change the speed of your R&D and business. I hope you’ll use the tips I’ve shared today to create smarter innovations with AI. If you have any more questions, feel free to ask in the comments!

Tuesday, September 2, 2025

Martin Fowler on the Future of LLMs and Software Development: Is ‘Hallucination’ Not a Flaw?

 

LLM์˜ 'ํ™˜๊ฐ'์ด ๊ฒฐํ•จ์ด ์•„๋‹ˆ๋ผ๊ณ ?
Is LLM's 'Hallucination' Not a Flaw?

The development paradigm for the LLM era, presented by world-renowned software development thinker Martin Fowler! Get a preview of the future developers will face, including ‘non-determinism’ and new security threats, through his sharp insights.

Hello! These days everyone is using AI, and LLMs (Large Language Models) in particular, for work. We have them write code, brainstorm ideas, or even explain complex concepts. I have also been thoroughly enjoying the convenience of LLMs, but a thought suddenly struck me: ‘Are we really understanding and using this tool correctly?’

While pondering this, I came across a recent article in which Martin Fowler, a world-renowned guru in the software development field, summarized his thoughts on LLMs and software development. It went beyond a simple ‘LLMs are amazing!’ take, offering deep insights into their fundamental nature and the changes we will face. Today, I would like to walk through his thoughts with you. 😊

LLM and Software Development

 

Martin Fowler on the Current State of LLMs 🤔

Martin Fowler first diagnoses the current AI industry as being in a clear ‘bubble.’ However, as with every technological innovation in history, he expects that even after the bubble bursts, companies will emerge that, like Amazon, survive and usher in a new era. The important point is that, at this stage, no one can be certain about the future of programming or the job security of specific professions.

Therefore, he emphasizes that an experimental attitude, personally using LLMs and actively sharing those experiences, matters more than making hasty predictions. In other words, we all need to become pioneers exploring this new tool.

💡 Good to know!

Fowler pointed out that recent surveys on LLM usage may not accurately reflect actual usage patterns. Since the capabilities of different models also vary widely, it seems more important to trust your own hands-on experience than the opinions of others.

 

LLM์˜ ํ™˜๊ฐ: ๊ฒฐํ•จ์ด ์•„๋‹Œ ๋ณธ์งˆ์  ํŠน์ง• ๐Ÿง 
LLM Hallucination: An Intrinsic Feature, Not a Flaw ๐Ÿง 

This was the most interesting part of the article for me. Fowler argues that the ‘hallucination’ phenomenon, in which LLMs create plausible but untrue information, should be seen as an ‘intrinsic feature’ rather than a mere ‘flaw.’ Isn’t that a striking claim? On this view, LLMs are ultimately ‘tools for generating useful hallucinations.’

From this viewpoint, we should not blindly trust the answers an LLM gives. Instead, it is essential to ask the same question multiple times, with varied phrasing, and check the answers for consistency. He adds that trying to use an LLM directly for problems that require a single definitive answer, such as numerical calculations, is not appropriate.

⚠️ ์ฃผ์˜ํ•˜์„ธ์š”!
⚠️ Be careful!

Fowler strongly criticizes the analogy of an LLM to a 'junior developer.' LLMs often declare "All tests passed!" with confidence while handing over code that actually fails its tests. If a human colleague did this repeatedly, it would be a serious flaw, one that erodes trust and becomes a personnel issue. An LLM should be treated not as a colleague but as a powerful 'tool' that can make mistakes.
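The practical response to "All tests passed!" is to run the checks yourself and trust the results, not the claim. Here is a minimal sketch of that discipline; `verify_candidate` is an illustrative helper, and the buggy median function stands in for LLM-generated code that looks plausible:

```python
def verify_candidate(fn, cases):
    """Run a candidate function against known input/output pairs and
    report failures, instead of trusting a self-reported pass."""
    failures = []
    for args, expected in cases:
        try:
            actual = fn(*args)
        except Exception as exc:  # a crash counts as a failure too
            failures.append((args, expected, repr(exc)))
            continue
        if actual != expected:
            failures.append((args, expected, actual))
    return failures

# Plausible-looking 'generated' code: wrong for even-length lists.
def llm_median(nums):
    return sorted(nums)[len(nums) // 2]

failures = verify_candidate(
    llm_median,
    [(([1, 3, 2],), 2), (([1, 2, 3, 4],), 2.5)],
)
```

The even-length case exposes the bug that the happy-path case hides, which is exactly why the verification set should include edge cases the code's author (human or not) did not choose.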

 

Software Engineering's Shift to an Era of 'Non-Determinism' 🎲

์ „ํ†ต์ ์ธ ์†Œํ”„ํŠธ์›จ์–ด ๊ณตํ•™์€ '๊ฒฐ์ •๋ก ์ '์ธ ์„ธ๊ณ„ ์œ„์— ์„ธ์›Œ์ ธ ์žˆ์—ˆ์Šต๋‹ˆ๋‹ค. '2+2'๋ฅผ ์ž…๋ ฅํ•˜๋ฉด '4'๊ฐ€ ๋‚˜์™€์•ผ ํ•˜๋“ฏ, ๋ชจ๋“  ๊ฒƒ์€ ์˜ˆ์ธก ๊ฐ€๋Šฅํ•˜๊ณ  ์ผ๊ด€์ ์ด์–ด์•ผ ํ–ˆ์ฃ . ์˜ˆ์ƒ๊ณผ ๋‹ค๋ฅธ ๊ฒฐ๊ณผ๋Š” '๋ฒ„๊ทธ'๋กœ ์ทจ๊ธ‰๋˜์–ด ์ฆ‰์‹œ ์ˆ˜์ •๋˜์—ˆ์Šต๋‹ˆ๋‹ค.
Traditional software engineering was built on a 'deterministic' world. Just as inputting '2+2' must yield '4', everything had to be predictable and consistent. Unexpected results were treated as 'bugs' and fixed immediately.

The arrival of LLMs, however, is fundamentally changing this paradigm. Fowler sees LLMs as a turning point that introduces 'non-determinism' into software engineering. Given the same request, an LLM can produce subtly different outputs, and it may hide critical errors inside plausible-looking code.

์ด์ œ ๊ฐœ๋ฐœ์ž์˜ ์—ญํ• ์€ ๋‹จ์ˆœํžˆ ์ฝ”๋“œ๋ฅผ ์ž‘์„ฑํ•˜๋Š” ๊ฒƒ์„ ๋„˜์–ด, LLM์ด ๋งŒ๋“ค์–ด๋‚ธ ๋ถˆํ™•์‹คํ•œ ๊ฒฐ๊ณผ๋ฌผ์„ ๋น„ํŒ์ ์œผ๋กœ ๊ฒ€์ฆํ•˜๊ณ  ๊ด€๋ฆฌํ•˜๋Š” ๋Šฅ๋ ฅ์ด ๋”์šฑ ์ค‘์š”ํ•ด์กŒ์Šต๋‹ˆ๋‹ค. ์•„๋ž˜ ํ‘œ๋กœ ๊ทธ ์ฐจ์ด๋ฅผ ๊ฐ„๋‹จํžˆ ์ •๋ฆฌํ•ด๋ดค์Šต๋‹ˆ๋‹ค.
Now, the role of a developer has become more about the ability to critically verify and manage the uncertain outputs generated by LLMs, going beyond simply writing code. I've summarized the differences in the table below.

| Category | Traditional Software (Deterministic) | LLM-based Software (Non-deterministic) |
|---|---|---|
| Result predictability | Same input, same output guaranteed | Different outputs possible for the same input |
| Definition of error | Any behavior deviating from expectation (a bug) | Uncertainty in results (an intrinsic feature) |
| Developer's role | Implementing precise logic and debugging | Verifying outputs and managing uncertainty |
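In practice, "managing uncertainty" often means validating every output and retrying on failure, rather than assuming the first result is correct. A minimal sketch, with a stubbed generator standing in for a real (non-deterministic) model call; all names here are illustrative:

```python
def generate_until_valid(generate, validate, max_attempts=3):
    """Call a non-deterministic generator until its output passes
    validation, or give up after max_attempts."""
    for attempt in range(1, max_attempts + 1):
        output = generate()
        if validate(output):
            return output, attempt
    raise RuntimeError(f"No valid output in {max_attempts} attempts")

# Stub: a different result on each call, as an LLM might produce.
outputs = iter(["result with error", "result with error", "clean result"])
result, attempts = generate_until_valid(
    generate=lambda: next(outputs),
    validate=lambda text: "error" not in text,
)
```

The design choice worth noting is that the validator, not the generator, is the source of truth: the loop encodes the deterministic contract your system needs on top of a non-deterministic component.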

 

The Unavoidable Threat: Security Issues 🔐

Finally, Fowler issues a serious warning that LLMs greatly expand the attack surface of software systems. In his view, tools that carry the 'lethal trifecta' of risks (access to private data, external communication, and exposure to untrusted content), such as browser agents, are fundamentally difficult to make secure.

For example, an attacker can hide commands on a web page that are invisible to the human eye, tricking an LLM into leaking sensitive personal information. Developers must now consider not only what their code does, but also the new security vulnerabilities that can arise wherever the system interacts with an LLM.

๐Ÿ’ก

๋งˆํ‹ด ํŒŒ์šธ๋Ÿฌ์˜ LLM ํ•ต์‹ฌ ์ธ์‚ฌ์ดํŠธ
Martin Fowler's Core LLM Insights

ํ™˜๊ฐ์€ ๋ณธ์งˆ:
Hallucination is Intrinsic:
LLM์˜ ํ™˜๊ฐ์€ '๊ฒฐํ•จ'์ด ์•„๋‹Œ '๋ณธ์งˆ์  ํŠน์ง•'์œผ๋กœ ์ดํ•ดํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค.
LLM's hallucination must be understood as an 'intrinsic feature,' not a 'flaw.'
๋น„๊ฒฐ์ •์„ฑ์˜ ์‹œ๋Œ€:
The Era of Non-Determinism:
์†Œํ”„ํŠธ์›จ์–ด ๊ณตํ•™์ด ์˜ˆ์ธก ๋ถˆ๊ฐ€๋Šฅ์„ฑ์„ ๊ด€๋ฆฌํ•˜๋Š” ์‹œ๋Œ€๋กœ ์ง„์ž…ํ–ˆ์Šต๋‹ˆ๋‹ค.
Software engineering has entered an era of managing unpredictability.
๊ฒ€์ฆ์€ ํ•„์ˆ˜:
Verification is a Must:
LLM์˜ ๊ฒฐ๊ณผ๋ฌผ์€ ์ฃผ๋‹ˆ์–ด ๊ฐœ๋ฐœ์ž๊ฐ€ ์•„๋‹Œ, ๊ฒ€์ฆ์ด ํ•„์ˆ˜์ ์ธ '๋„๊ตฌ'์˜ ์‚ฐ์ถœ๋ฌผ์ž…๋‹ˆ๋‹ค.
The output of an LLM is not that of a junior developer, but the product of a 'tool' that requires mandatory verification.
Security Threats:
LLMs are a new security variable that broadens a system's attack surface.

์ž์ฃผ ๋ฌป๋Š” ์งˆ๋ฌธ ❓
Frequently Asked Questions ❓

Q: Why does Martin Fowler say that 'hallucination' should be seen as an intrinsic feature, not a flaw?
A: Because an LLM is a model that generates sentences by predicting the most plausible next word from vast amounts of data. In that process, 'hallucination,' producing fluent sentences regardless of factual accuracy, is a natural outcome, and understanding this characteristic is key to using LLMs correctly.
Q: What does 'non-determinism' in software engineering mean, and why does it matter?
A: 'Non-determinism' means that the same input does not always produce the same output. Traditional software had to be 100% predictable, but an LLM can give different answers to the same question. Understanding and managing this uncertainty has become a core competency for developers in the age of LLMs.
Q: LLM์ด ์ƒ์„ฑํ•œ ์ฝ”๋“œ๋ฅผ ์‹ ๋ขฐํ•˜๊ณ  ๋ฐ”๋กœ ์‚ฌ์šฉํ•ด๋„ ๋ ๊นŒ์š”?
Q: Can I trust and use the code generated by an LLM immediately?
A: ์•„๋‹ˆ์š”, ์ ˆ๋Œ€ ์•ˆ ๋ฉ๋‹ˆ๋‹ค. ๋งˆํ‹ด ํŒŒ์šธ๋Ÿฌ๋Š” LLM์ด ๊ทธ๋Ÿด๋“ฏํ•˜์ง€๋งŒ ์ž‘๋™ํ•˜์ง€ ์•Š๊ฑฐ๋‚˜, ๋ณด์•ˆ์— ์ทจ์•ฝํ•œ ์ฝ”๋“œ๋ฅผ ์ƒ์„ฑํ•  ์ˆ˜ ์žˆ๋‹ค๊ณ  ๊ฒฝ๊ณ ํ•ฉ๋‹ˆ๋‹ค. ์ƒ์„ฑ๋œ ์ฝ”๋“œ๋Š” ๋ฐ˜๋“œ์‹œ ๊ฐœ๋ฐœ์ž๊ฐ€ ์ง์ ‘ ๊ฒ€ํ† , ํ…Œ์ŠคํŠธ, ๊ฒ€์ฆํ•˜๋Š” ๊ณผ์ •์„ ๊ฑฐ์ณ์•ผ ํ•ฉ๋‹ˆ๋‹ค.
A: No, absolutely not. Martin Fowler warns that LLMs can generate code that looks plausible but doesn't work or is insecure. The generated code must be reviewed, tested, and verified by a developer.
Q: LLM์„ ์‚ฌ์šฉํ•˜๋ฉด ์™œ ๋ณด์•ˆ ์œ„ํ˜‘์ด ์ปค์ง€๋‚˜์š”?
Q: Why do security threats increase with the use of LLMs?
A: LLM์€ ์™ธ๋ถ€ ๋ฐ์ดํ„ฐ์™€ ์ƒํ˜ธ์ž‘์šฉํ•˜๊ณ , ๋•Œ๋กœ๋Š” ๋ฏผ๊ฐํ•œ ์ •๋ณด์— ์ ‘๊ทผํ•  ์ˆ˜ ์žˆ๊ธฐ ๋•Œ๋ฌธ์ž…๋‹ˆ๋‹ค. ์•…์˜์ ์ธ ์‚ฌ์šฉ์ž๊ฐ€ ์›น์‚ฌ์ดํŠธ๋‚˜ ์ž…๋ ฅ๊ฐ’์— ๋ณด์ด์ง€ ์•Š๋Š” ๋ช…๋ น์–ด๋ฅผ ์ˆจ๊ฒจ LLM์„ ์กฐ์ข…(ํ”„๋กฌํ”„ํŠธ ์ธ์ ์…˜)ํ•˜์—ฌ ์ •๋ณด๋ฅผ ์œ ์ถœํ•˜๊ฑฐ๋‚˜ ์‹œ์Šคํ…œ์„ ๊ณต๊ฒฉํ•˜๋Š” ์ƒˆ๋กœ์šด ํ˜•ํƒœ์˜ ๋ณด์•ˆ ์œ„ํ˜‘์ด ๋ฐœ์ƒํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค.
A: Because LLMs interact with external data and can sometimes access sensitive information. Malicious users can hide invisible commands in websites or inputs to manipulate the LLM (prompt injection), leading to new types of security threats such as data leakage or system attacks.

๋งˆํ‹ด ํŒŒ์šธ๋Ÿฌ์˜ ํ†ต์ฐฐ์€ LLM์ด๋ผ๋Š” ์ƒˆ๋กœ์šด ๋„๊ตฌ๋ฅผ ์–ด๋–ป๊ฒŒ ๋ฐ”๋ผ๋ณด๊ณ  ์‚ฌ์šฉํ•ด์•ผ ํ•˜๋Š”์ง€์— ๋Œ€ํ•œ ์ค‘์š”ํ•œ ๊ฐ€์ด๋“œ๋ฅผ ์ œ์‹œํ•ฉ๋‹ˆ๋‹ค. ๋‹จ์ˆœํžˆ ํŽธ๋ฆฌํ•œ ์ฝ”๋“œ ์ƒ์„ฑ๊ธฐ๋ฅผ ๋„˜์–ด, ์šฐ๋ฆฌ ๊ฐœ๋ฐœ ํ™˜๊ฒฝ์˜ ๊ทผ๋ณธ์ ์ธ ํŒจ๋Ÿฌ๋‹ค์ž„์„ ๋ฐ”๊พธ๋Š” ์กด์žฌ์ž„์„ ์ธ์‹ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๊ทธ์˜ ์กฐ์–ธ์ฒ˜๋Ÿผ, ๋‘๋ ค์›Œํ•˜๊ฑฐ๋‚˜ ๋งน์‹ ํ•˜๊ธฐ๋ณด๋‹ค๋Š” ์ ๊ทน์ ์œผ๋กœ ์‹คํ—˜ํ•˜๊ณ  ๊ฒฝํ—˜์„ ๊ณต์œ ํ•˜๋ฉฐ ์ด ๊ฑฐ๋Œ€ํ•œ ๋ณ€ํ™”์˜ ๋ฌผ๊ฒฐ์— ํ˜„๋ช…ํ•˜๊ฒŒ ์˜ฌ๋ผํƒ€์•ผ ํ•  ๋•Œ์ž…๋‹ˆ๋‹ค.
Martin Fowler's insights provide an important guide on how to view and use the new tool that is the LLM. We must recognize it not just as a convenient code generator, but as an entity that is changing the fundamental paradigm of our development environment. As he advises, now is the time to wisely ride this massive wave of change by experimenting and sharing experiences, rather than fearing or blindly trusting it.

์—ฌ๋Ÿฌ๋ถ„์€ LLM์— ๋Œ€ํ•ด ์–ด๋–ป๊ฒŒ ์ƒ๊ฐํ•˜์‹œ๋‚˜์š”? ๊ฐœ๋ฐœ ๊ณผ์ •์—์„œ ๊ฒช์—ˆ๋˜ ํฅ๋ฏธ๋กœ์šด ๊ฒฝํ—˜์ด ์žˆ๋‹ค๋ฉด ๋Œ“๊ธ€๋กœ ๊ณต์œ ํ•ด์ฃผ์„ธ์š”! ๐Ÿ˜Š
What are your thoughts on LLMs? If you have any interesting experiences from your development process, please share them in the comments! ๐Ÿ˜Š

Sunday, August 31, 2025

"I Direct, Therefore I Create" - The New Relationship Between AI and the Creator

AI ์‹œ๋Œ€์˜ ์ฐฝ์ž‘์ž, ๋‚˜๋Š” ๋ˆ„๊ตฌ์ธ๊ฐ€? / Who is the Creator in the Age of AI?

AI์—๊ฒŒ '์ง€์‹œ'๋งŒ ๋‚ด๋ฆฐ ์‚ฌ๋žŒ, ๊ณผ์—ฐ ์ฐฝ์ž‘์ž์ผ๊นŒ์š”?
If You Only 'Direct' an AI, Are You Still the Creator?

์ตœ๊ทผ ์ธ๊ณต์ง€๋Šฅ(AI)์„ ํ™œ์šฉํ•ด 15์ดˆ ๋ถ„๋Ÿ‰์˜ ์งง์€ ์˜์ƒ์„ ๋งŒ๋“ค์–ด ๋ณด์•˜์Šต๋‹ˆ๋‹ค. ์ œ๊ฐ€ ํ•œ ๊ฒƒ์ด๋ผ๊ณ ๋Š” ์˜ค์ง ๋‘ ๊ฐ€์ง€ ํ”„๋กฌํ”„ํŠธ๋ฅผ ์ง€์‹œํ•œ ๊ฒƒ๋ฟ์ด์—ˆ์Šต๋‹ˆ๋‹ค.
I recently used artificial intelligence (AI) to create a short, 15-second video. All I did was provide two prompts.

๐Ÿ‘ค ์ธ๋ฌผ ์บ๋ฆญํ„ฐ ์ƒ์„ฑ ํ”„๋กฌํ”„ํŠธ
๐Ÿ‘ค Character Creation Prompt

“๊ฐ์ •์ ์œผ๋กœ ๋นˆํ‹ฐ์ง€ ๋งˆ์ดํฌ์— ๋Œ€๊ณ  ๋…ธ๋ž˜ํ•˜๋Š” ์ Š์€ ์—ฌ์„ฑ์˜ ํด๋กœ์ฆˆ์—… ์ดˆ์ƒํ™”. ์€์€ํ•˜๊ฒŒ ๋ฐ˜์ง์ด๋Š” ์—ฐํ•œ ํ•˜๋Š˜์ƒ‰ ๋“œ๋ ˆ์Šค๋ฅผ ์ž…๊ณ  ์žˆ์œผ๋ฉฐ, ๋ฉ”์ดํฌ์—…์€ ๋ถ€๋“œ๋Ÿฝ๊ณ  ๊ฐ์ • ์–ด๋ฆฐ ๋ˆˆ๋น›๊ณผ ๋ถ‰์€ ์ž…์ˆ ์„ ๊ฐ•์กฐํ•œ๋‹ค. ๋จธ๋ฆฌ๋Š” ๊ฝƒ ์žฅ์‹์ด ๋‹ฌ๋ฆฐ ๋‹จ์ •ํ•œ ์—…์Šคํƒ€์ผ๋กœ ๋ฌถ์—ฌ ์žˆ๋‹ค. ๋ฐฐ๊ฒฝ์€ ์€์€ํ•˜๊ฒŒ ๋น›๋‚˜๋Š” ํŒŒ๋ž€ ์ปคํŠผ๊ณผ ํ๋ฆฟํ•œ ๋”ฐ๋œปํ•œ ์ „๊ตฌ ์กฐ๋ช…์ด ์•„๋ จํ•˜๊ณ  ์นœ๋ฐ€ํ•œ ๋ฌด๋Œ€ ๋ถ„์œ„๊ธฐ๋ฅผ ๋งŒ๋“ ๋‹ค. ์Šคํƒ€์ผ์€ ์‚ฌ์‹ค์ ์ด๊ณ  ์‹œ๋„ค๋งˆํ‹ฑํ•˜๋ฉฐ, ์–ผ๊ตด๊ณผ ๋งˆ์ดํฌ์— ์ดˆ์ ์„ ๋งž์ถ˜ ๊ณ ํ•ด์ƒ๋„ ๋””ํ…Œ์ผ๋กœ ํ‘œํ˜„ํ•œ๋‹ค.”

"A close-up portrait of a young woman singing emotionally into a vintage microphone. She wears a sparkling light-blue dress with thin straps, and her makeup highlights her soft, expressive eyes and red lips. Her hair is styled in a loose elegant updo with a flower accessory. The background has softly glowing blue curtains and blurred warm string lights, creating a dreamy and intimate stage mood. The style should be photorealistic, cinematic, and highly detailed, focusing on her face and microphone."

🎵 Lyrics and Music Generation Prompt

"A heartfelt ballad with soft piano melodies, sung in a tender and emotional voice. The song recalls the bittersweet memory of a first love — nostalgic, delicate, and filled with longing. The lyrics are written in Korean, filled with shades of nostalgia, fragility, and yearning. The mood is sentimental and cinematic, with a gentle rhythm and expressive dynamics that reveal both the pain and the beauty of lost love."

์ด ๋‘ ๊ฐ€์ง€ ์ง€์‹œ์— ๋”ฐ๋ผ AI๊ฐ€ ์˜์ƒ์„ ์ฐฝ์ž‘ํ–ˆ๊ณ , ๋ฌด๋ฃŒ ๋ฒ„์ „์ด๋ผ 15์ดˆ ๊ธธ์ด๋กœ ์ œ์ž‘๋˜์—ˆ์Šต๋‹ˆ๋‹ค.
Following these two instructions, the AI created the video. Since I used a free version, it was produced as a 15-second clip.

์˜ˆ์ˆ ๊ณ„์—์„œ๋Š” ์ž‘ํ’ˆ์„ ์ง์ ‘ ์ œ์ž‘ํ•˜์ง€ ์•Š๋”๋ผ๋„ ์ฐฝ์ž‘ ๊ณผ์ •์„ ๊ธฐํšํ•˜๊ฑฐ๋‚˜ ๋ฐฉํ–ฅ์„ ์ œ์‹œํ•œ ์‚ฌ๋žŒ ๋˜ํ•œ ์ฐฝ์ž‘์ž๋กœ ์ธ์ •๋œ๋‹ค๊ณ  ํ•ฉ๋‹ˆ๋‹ค. ๊ทธ๋ ‡๋‹ค๋ฉด ์ € ์—ญ์‹œ ์ฐฝ์ž‘์ž๋กœ ๋ถˆ๋ฆด ์ˆ˜ ์žˆ์„๊นŒ์š”? ๊ธฐ์ˆ ์ด ๋„ˆ๋ฌด ๋น ๋ฅด๊ฒŒ ์ง„ํ™”ํ•˜๋Š” ์˜ค๋Š˜๋‚ , ์ด๋Ÿฌํ•œ ๋ฌผ์Œ์„ ๊ณฑ์”น์„ ํ‹ˆ๋„ ์—†์ด ์ƒˆ๋กœ์šด ํ˜„์‹ค์ด ์šฐ๋ฆฌ ์•ž์— ํŽผ์ณ์ง€๊ณ  ์žˆ์Œ์„ ์‹ค๊ฐํ•ฉ๋‹ˆ๋‹ค.
In the art world, it's said that even if someone doesn't physically create the work, the person who plans or directs the creative process is also recognized as a creator. If that's the case, can I too be called a creator? With technology evolving so rapidly today, I realize that a new reality is unfolding before us, leaving little time to ponder such questions.

์˜ˆ์ˆ ์˜ ์˜ค๋žœ ์งˆ๋ฌธ: "์ฐฝ์ž‘์ž"๋Š” ๋ˆ„๊ตฌ์ธ๊ฐ€?
Art's Enduring Question: Who is the "Creator"?

์˜ˆ์ˆ ๊ณ„์—์„œ๋Š” ์˜ค๋ž˜์ „๋ถ€ํ„ฐ “์ž‘ํ’ˆ์˜ ์ฐฝ์ž‘์ž๋ž€ ๋ˆ„๊ตฌ์ธ๊ฐ€”๋ผ๋Š” ๋…ผ์Ÿ์ด ์ด์–ด์ ธ ์™”์Šต๋‹ˆ๋‹ค. ์‹ค์งˆ์ ์œผ๋กœ ๋ถ“์„ ๋“ค์ง€ ์•Š์•˜๋”๋ผ๋„ ์ž‘ํ’ˆ์˜ ๊ธฐํš, ๊ตฌ๋„, ๊ฐœ๋…์„ ์ œ์‹œํ•œ ์‚ฌ๋žŒ์€ ‘์ฐฝ์ž‘์ž’๋กœ ๊ฐ„์ฃผ๋˜์–ด ์™”์Šต๋‹ˆ๋‹ค. ๋Œ€ํ‘œ์ ์œผ๋กœ ๋’ค์ƒน(Marcel Duchamp)์˜ ๋ ˆ๋””๋ฉ”์ด๋“œ(Ready-made) ์˜ˆ์ˆ ์ด ๋ณด์—ฌ์ฃผ๋“ฏ, ๋ฌผ๊ฑด ์ž์ฒด๋ฅผ ๋งŒ๋“  ์‚ฌ๋žŒ์ด ์•„๋‹Œ, ๊ทธ๊ฒƒ์„ ์˜ˆ์ˆ ์˜ ๋งฅ๋ฝ์œผ๋กœ ๋Œ์–ด์˜ฌ๋ฆฐ ์‚ฌ๋žŒ์ด ์ฐฝ์ž‘์ž๋กœ ๋ถˆ๋ ธ์Šต๋‹ˆ๋‹ค.
For a long time, the art world has debated the question, "Who is the creator of a work of art?" Even those who didn't physically hold the brush have been considered 'creators' if they provided the concept, composition, and plan for the piece. A prime example is Marcel Duchamp's "Ready-made" art. The person who elevated an object into the context of art, not the person who manufactured the object itself, was called the creator.

์ด๋Š” ์ฐฝ์ž‘์˜ ๋ณธ์งˆ์ด ํ–‰์œ„์˜ ๋ฌผ๋ฆฌ์  ๊ตฌํ˜„์ด ์•„๋‹ˆ๋ผ ‘์˜๋„์˜ ์ง€์‹œ์™€ ๊ธฐํš’์— ์žˆ๋‹ค๋Š” ์ฒ ํ•™์  ์ „์ œ๋ฅผ ๋ฐ˜์˜ํ•ฉ๋‹ˆ๋‹ค.
This reflects the philosophical premise that the essence of creation lies not in the physical act of making, but in the 'intention, direction, and planning.'

์ง€์‹œ์ž vs ์ฐฝ์ž‘์ž: ๋ฒ•๊ณผ ํ˜„์‹ค์˜ ๊ฐ„๊ทน
The Director vs. The Creator: A Gap Between Law and Reality

Ultimately, by giving the AI two prompts, I 'directed and supervised' a creative act. Philosophically, this act can be seen as close to the essence of creation. Legally, however, the status of a 'director' like me is not yet fully recognized as that of a creator. This very gap is the starting point for legal-philosophical inquiry, and a new question posed by technological progress.

๋”ฐ๋ผ์„œ ์ €๋Š” ์Šค์Šค๋กœ์—๊ฒŒ ๋ฌป์Šต๋‹ˆ๋‹ค.
Therefore, I ask myself:

“์ž‘ํ’ˆ์˜ ์™„์„ฑ๋œ ๋ฌผ๋ฆฌ์  ํ˜•ํƒœ๋ฅผ ๋งŒ๋“  ๊ฒƒ์ด ์ค‘์š”ํ•œ๊ฐ€, ์•„๋‹ˆ๋ฉด ๊ทธ ๊ณผ์ •์„ ๊ธฐํšํ•˜๊ณ  ๋ฐฉํ–ฅ์„ ์ •ํ•œ ๊ฒƒ์ด ์ค‘์š”ํ•œ๊ฐ€?”
"Is it more important to have created the final physical form of a work, or to have planned and directed the process?"

๊ธฐ์ˆ ์€ ์ด๋ฏธ ์ด ์งˆ๋ฌธ์„ ์šฐ๋ฆฌ์—๊ฒŒ ๊ฐ•์š”ํ•˜๊ณ  ์žˆ์œผ๋ฉฐ, ๋ฒ•๊ณผ ์ฒ ํ•™์€ ์ด์ œ ๊ทธ ๋‹ต์„ ์ƒˆ๋กญ๊ฒŒ ๋ชจ์ƒ‰ํ•ด์•ผ ํ•˜๋Š” ์‹œ์ ์— ์™€ ์žˆ์Šต๋‹ˆ๋‹ค.
Technology is already forcing this question upon us, and now law and philosophy must find a new answer.

์•„๋ž˜์—์„œ AI๊ฐ€ ์ƒ์„ฑํ•œ ์˜์ƒ์„ ์ง์ ‘ ํ™•์ธํ•ด ๋ณด์„ธ์š”.
You can watch the video generated by the AI below.
