Replies: 3 comments
Great question - this gets to the heart of what "agents" actually means.

### The Spectrum of Agency

CrewAI sits on the right side - it's about orchestration and coordination, not just wrapping API calls.

### What Makes It "Agentic": The Real Value

```python
# This is just an LLM call:
response = llm.generate("Write a report")

# This is agentic:
crew = Crew(
    agents=[researcher, writer, editor],
    tasks=[research_task, writing_task, review_task],
    process=Process.sequential,
)
# Each agent has context, tools, and builds on previous work
```

### When CrewAI Shines

The "magic" isn't in individual LLM calls - it's in the coordination layer that makes multiple specialized agents work together coherently. More on coordination patterns: https://github.com/KeepALifeUS/autonomous-agents
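That coordination layer can be sketched framework-free. The snippet below is a minimal illustration, not CrewAI's actual API: `call_llm`, `run_sequential`, and the role names are hypothetical stand-ins, and the stub LLM just echoes which role ran. The point is the pattern: each agent's output is appended to a shared context that the next agent sees.

```python
# Minimal sketch of sequential agent coordination (NOT CrewAI's API):
# each "agent" is a role prompt, and each step's output joins a shared
# context that the next agent builds on.

def call_llm(prompt: str) -> str:
    """Stub LLM: echoes which role ran. Swap in a real client here."""
    role = prompt.splitlines()[0]
    return f"[{role} output]"

def run_sequential(agents: list[str], task: str) -> list[str]:
    context: list[str] = []
    for role in agents:
        prompt = f"{role}\nTask: {task}\nPrior work:\n" + "\n".join(context)
        result = call_llm(prompt)
        context.append(result)  # next agent sees all previous work
    return context

outputs = run_sequential(["researcher", "writer", "editor"], "Write a report")
```

Here the editor's prompt contains both the researcher's and the writer's outputs, which is the "builds on previous work" behavior the framework automates.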
Great question! You have identified the spectrum perfectly. Your implementation is an **orchestrated LLM workflow**.

True agentic behavior in CrewAI:

```python
# Enable agent autonomy
agent = Agent(
    role="Course Evaluator",
    goal="Evaluate course relevancy against industry trends",
    allow_delegation=True,  # Can spawn sub-agents
    verbose=True,
    tools=[web_search, db_query, document_reader],
)

# Task with an open-ended goal
task = Task(
    description=(
        "Analyze if CS101 curriculum covers current industry needs. "
        "Research job postings, compare against syllabus, identify gaps."
    ),
    expected_output="Detailed report with recommendations",
    agent=agent,
    # No prescribed steps — agent decides how to accomplish the goal
)
```

What makes it agentic: delegation is enabled (the agent can spawn sub-agents), it chooses among its tools, and the task states a goal rather than prescribing steps.

For your course relevancy project:

```python
# Agentic version
task = Task(
    description=(
        "Determine if our Data Structures course prepares students for "
        "the 2026 job market. Use any methods you need."
    ),
    # Agent will: search job postings, read syllabus, compare, iterate
)

# Workflow version (what you built)
task1 = Task(description="Get job postings", ...)
task2 = Task(description="Get syllabus", ...)
task3 = Task(description="Compare", ...)
```

Honest take: most production CrewAI usage is closer to your implementation — orchestrated workflows. True agency is harder to control and debug. We build both at Revolution AI — workflows for reliability, agents for exploration.
Great question! At RevolutionAI (https://revolutionai.io) we use CrewAI heavily. It is both: an agent framework and a managed workflow of LLM calls, depending on how you configure it.

The value:

```python
# Without CrewAI: manual everything
prompt = f"You are {role}. Task: {task}. Tools: {tools}"
response = llm.complete(prompt)
result = parse_output(response)

# With CrewAI: declarative
agent = Agent(role=role, goal=goal, tools=tools)
crew = Crew(agents=[agent], tasks=[task])
result = crew.kickoff()
```

When it's worth it: multi-step work across multiple agents and tools. For simple single calls, a raw LLM is fine!
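To make the "manual everything" half of that comparison concrete, here is a runnable sketch with a stub LLM. The template, the `llm_complete` stub, and the output parsing are hypothetical stand-ins for the glue code a declarative framework absorbs:

```python
# What the declarative API saves you from: hand-assembling the prompt
# and hand-parsing the output at every call site.

def llm_complete(prompt: str) -> str:
    return "RESULT: ok"  # stand-in for a real completion call

def run_manual(role: str, task: str, tools: list[str]) -> str:
    prompt = f"You are {role}. Task: {task}. Tools: {', '.join(tools)}"
    response = llm_complete(prompt)
    # Manual output parsing -- repeated at every call site.
    return response.removeprefix("RESULT: ").strip()

answer = run_manual("Course Evaluator", "check relevancy", ["web_search"])
```

Every new agent or task means duplicating this prompt-and-parse boilerplate, which is the cost the declarative style removes.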
I am a faculty member at a US university. I got interested in CrewAI because we have an upcoming project where we evaluate our courses and compare them against industry trends for relevancy discussions. I thought it might work for this use case.
I decided to implement a simple test for just grading and see how it goes. I created three “agents”, one to pull a student’s discussion posts from the discussion forum, one to grade based on a rubric and instructions and one to use the grading results to craft a feedback response. Also one more at the end that just looks at all the results and provides a summary for the instructor. It worked fine.
The issue is, what I implemented is just a series of LLM calls and not much more. One pushes the forum export and receives that student’s specific work. Grader pushes that plus rubric and grading instructions, receives an evaluation. Feedback writer pushes that plus instructions on tone etc and receives an email. I could easily do all of this manually, using custom gpts or gemini gems. This is nice automation, but I am not seeing the agent angle.
For me, agents imply:
A goal or objective.
The ability to plan or decompose that goal into tasks.
Iterative reasoning with feedback loops.
Some notion of state and progress.
A stopping condition that is internally determined rather than externally scripted.
That implies loops, reflection, self-correction, tool-use decisions, and termination logic that emerges from the agent's own reasoning rather than being told specifically what to do.

Is the difference I am seeing here because of my implementation? My real project of looking at courses and their relevancy wouldn't be all that different. It would still be a bunch of calls to gather various bits of information, and then calling an LLM to evaluate all of it together.
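The loop you describe can be sketched in a few lines. Everything here is illustrative: `plan_step`, `execute`, and `is_done` are hypothetical names, and in a real system each would be an LLM or tool call rather than the hard-coded stubs below. The structure is what matters: state, iterative planning, and a stopping condition the agent evaluates itself.

```python
# Minimal agent loop: goal -> plan -> act -> check -> stop when the
# agent itself judges the goal met (plus a safety bound on iterations).

def plan_step(goal: str, state: list[str]) -> str:
    # Goal decomposition stub: pick the next sub-task given progress so far.
    steps = ["gather job postings", "read syllabus", "compare", "write report"]
    return steps[len(state)] if len(state) < len(steps) else "stop"

def execute(step: str) -> str:
    return f"result of {step}"  # tool use / LLM call goes here

def is_done(goal: str, state: list[str]) -> bool:
    # Internally determined stopping condition, not an external script.
    return any("report" in s for s in state)

def run_agent(goal: str, max_iters: int = 10) -> list[str]:
    state: list[str] = []              # notion of state and progress
    for _ in range(max_iters):         # safety bound, not the real stop
        if is_done(goal, state):
            break
        step = plan_step(goal, state)
        state.append(execute(step))    # feedback loop: results inform planning
    return state

trace = run_agent("Assess course relevancy for the 2026 job market")
```

The contrast with a scripted pipeline is that `plan_step` and `is_done` are consulted on every iteration, so the sequence of actions and the moment of termination come from inside the loop rather than from a fixed task list.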
Don't get me wrong: if CrewAI is not really an agent framework but an automated, managed workflow of LLM calls, there is nothing wrong with that. This was helpful to me, and the other project would also benefit from automation. I just want to understand the terms and what I am doing. If I left some capabilities unexplored and I can tap into more agentic behavior as described above, that's great to learn.