AI_LIST
The AI_LIST function uses a large language model (LLM) to generate lists of items based on a textual prompt and optional context information. By default, the function leverages Mistral AI’s chat completion API, though any OpenAI-compatible endpoint can be configured. This approach automates list generation tasks such as brainstorming ideas, identifying requirements, outlining action items, and discovering relevant options—domains where LLMs excel at understanding intent and producing comprehensive, contextually relevant results.
The function sends a prompt (e.g., “list compliance requirements for healthcare organizations”, “list marketing KPIs”) along with optional context values to the LLM. The model generates a list and returns results as a 2D array (single column) with one item per row, suitable for Excel’s spill behavior. The optional values parameter provides additional context that influences list generation—for example, when generating priority action items, providing quarterly business review notes guides the model to extract actionable items from that specific content.
Under the hood, the function uses the Mistral Chat Completions API with JSON mode enabled via the response_format parameter, ensuring structured and parseable output. Key parameters control behavior: temperature (default 0, range 0.0–2.0) manages output variety—lower values like 0 produce consistent, deterministic lists ideal for standardized requirements, while higher values introduce creative variation (Mistral recommends values between 0.0 and 0.7 for most use cases). The max_tokens parameter (default 1000, range 5–5000) limits response length and API costs.
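As a concrete illustration of the parameters described above, the sketch below builds the kind of chat-completions payload the function sends. Field names follow the OpenAI-compatible chat completions schema used by Mistral; the prompt text is an abbreviated stand-in, not the function's exact wording.

```python
import json

# Sketch of an AI_LIST-style request body (abbreviated prompt for illustration)
payload = {
    "model": "codestral-2508",
    "messages": [{
        "role": "user",
        "content": 'List marketing KPIs. Return ONLY a JSON object like {"items": ["item1", "item2"]}.',
    }],
    "temperature": 0,          # deterministic output for standardized lists
    "max_tokens": 1000,        # caps response length and API cost
    "response_format": {"type": "json_object"},  # JSON mode: forces parseable output
}

print(json.dumps(payload, indent=2))
```

With `response_format` set to `json_object`, the model is constrained to emit valid JSON, which is what makes the downstream parsing into a single-column array reliable.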
Common applications include generating risk mitigation strategies, identifying project requirements, brainstorming marketing ideas, discovering compliance checkpoints, outlining presentation topics, and listing product features. The LLM approach provides more comprehensive and contextually aware results compared to keyword matching or rule-based systems. For more information on available models and configuration options, see the Mistral AI documentation.
This example function is provided as-is without any representation of accuracy.
Excel Usage
=AI_LIST(prompt, api_key, values, temperature, max_tokens, model, api_url)
- prompt (str, required): The request describing what list items to generate.
- api_key (str, required): API key for authentication.
- values (list[list], optional, default: None): Optional 2D list of context values to inform list generation.
- temperature (float, optional, default: 0): Controls randomness in the AI response (0.0 = deterministic, 2.0 = highly random).
- max_tokens (int, optional, default: 1000): Maximum tokens in the AI response (5 to 5000).
- model (str, optional, default: "codestral-2508"): Model ID to use.
- api_url (str, optional, default: "https://api.mistral.ai/v1/chat/completions"): OpenAI-compatible API endpoint URL.
Returns (list[list]): 2D list (single column) of items, or an error message string on failure.
Example 1: Marketing KPIs
Inputs:
| prompt | temperature | max_tokens | model |
|---|---|---|---|
| List essential marketing KPIs for quarterly performance reviews | 0 | 1000 | codestral-2508 |
Excel formula:
=AI_LIST("List essential marketing KPIs for quarterly performance reviews", "YOUR_API_KEY", , 0, 1000, "codestral-2508")
Expected output:
| Revenue Growth |
|---|
| Customer Acquisition Cost (CAC) |
| Customer Lifetime Value (CLV) |
| Conversion Rate |
| Click-Through Rate (CTR) |
| Return on Ad Spend (ROAS) |
| Engagement Rate |
| Social Media Metrics (Likes, Shares, Followers) |
| Website Traffic |
| Bounce Rate |
| Average Session Duration |
| Lead Generation |
| Sales Conversion Rate |
| Net Promoter Score (NPS) |
| Customer Satisfaction Score (CSAT) |
| Marketing Spend Efficiency |
| Return on Investment (ROI) |
Example 2: Risk mitigation strategies
Inputs:
| prompt | temperature | max_tokens | model |
|---|---|---|---|
| List effective risk mitigation strategies for enterprise software implementation | 0 | 1000 | codestral-2508 |
Excel formula:
=AI_LIST("List effective risk mitigation strategies for enterprise software implementation", "YOUR_API_KEY", , 0, 1000, "codestral-2508")
Expected output:
| Conduct a thorough risk assessment |
|---|
| Implement a robust change management process |
| Establish clear roles and responsibilities |
| Use a phased approach to implementation |
| Leverage agile methodologies |
| Conduct regular testing and quality assurance |
| Ensure comprehensive training and support |
| Implement a rollback plan |
| Monitor and review the implementation process |
| Foster a culture of continuous improvement |
Example 3: Healthcare compliance requirements
Inputs:
| prompt | temperature | max_tokens | model |
|---|---|---|---|
| List key compliance requirements for healthcare organizations | 0 | 1000 | codestral-2508 |
Excel formula:
=AI_LIST("List key compliance requirements for healthcare organizations", "YOUR_API_KEY", , 0, 1000, "codestral-2508")
Expected output:
| HIPAA Privacy Rule |
|---|
| HIPAA Security Rule |
| HITRUST Common Security Framework |
| NIST Cybersecurity Framework |
| ISO 27001 |
| GDPR |
| CCPA |
| HIPAA Omnibus Rule |
| ONC Privacy and Security Final Rule |
| State and Local Laws and Regulations |
Example 4: Action items from context values
Inputs:
| prompt | values | temperature | max_tokens | model |
|---|---|---|---|---|
| List priority action items based on these quarterly business review notes: | Q1 revenue fell 5% below target | 0 | 1000 | codestral-2508 |
| | Customer complaints increased by 12% | | | |
| | New product launch delayed by 3 weeks | | | |
Excel formula:
=AI_LIST("List priority action items based on these quarterly business review notes:", "YOUR_API_KEY", {"Q1 revenue fell 5% below target";"Customer complaints increased by 12%";"New product launch delayed by 3 weeks"}, 0, 1000, "codestral-2508")
Expected output:
| Investigate Q1 revenue decline |
|---|
| Address increased customer complaints |
| Accelerate new product launch |
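In Excel, `{"a";"b";"c"}` is a one-column, three-row array constant (semicolons separate rows). Assuming the add-in marshals such a range to Python as a list of single-element rows (which is what the function's handling of `values` implies), the context for Example 4 arrives and is flattened like this:

```python
# Example 4's values argument as a Python 2D list (one cell per row)
values = [
    ["Q1 revenue fell 5% below target"],
    ["Customer complaints increased by 12%"],
    ["New product launch delayed by 3 weeks"],
]

# The function joins the first column into newline-separated context lines
# that get appended to the prompt sent to the model
values_str = "\n".join(str(row[0]) for row in values)
print(values_str)
```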
Python Code
import requests
import json


def ai_list(prompt, api_key, values=None, temperature=0, max_tokens=1000, model='codestral-2508', api_url='https://api.mistral.ai/v1/chat/completions'):
    """
    Generate a list of items using an AI model based on a prompt and optional context values.

    This example function is provided as-is without any representation of accuracy.

    Args:
        prompt (str): The request describing what list items to generate.
        api_key (str): API key for authentication.
        values (list[list], optional): Optional 2D list of context values to inform list generation. Default is None.
        temperature (float, optional): Controls randomness in the AI response (0.0 = deterministic, 2.0 = highly random). Default is 0.
        max_tokens (int, optional): Maximum tokens in the AI response (5 to 5000). Default is 1000.
        model (str, optional): Model ID to use. Default is 'codestral-2508'.
        api_url (str, optional): OpenAI-compatible API endpoint URL. Default is 'https://api.mistral.ai/v1/chat/completions'.

    Returns:
        list[list]: 2D list (single column) of items, or an error message string on failure.
    """
    if not api_key:
        return ("You must include an API key to use this function. Sign up for a free API key at "
                "https://aistudio.google.com/, https://console.mistral.ai/, or other providers and add your own "
                "api_key. You may use any OpenAI compatible API, just update the api_url parameter.")

    # Validate temperature
    if not isinstance(temperature, (float, int)) or not (0 <= float(temperature) <= 2):
        return "Error: temperature must be a float between 0 and 2 (inclusive)"

    # Validate max_tokens
    if not isinstance(max_tokens, int) or not (5 <= max_tokens <= 5000):
        return "Error: max_tokens must be an integer between 5 and 5000 (inclusive)"

    # Build the prompt, appending any context values as newline-separated lines
    list_prompt = f"Generate a list based on this request: {prompt}"
    if values:
        # Take the first cell of each row; tolerate rows that are bare values rather than lists
        values_str = "\n".join(str(row[0]) if isinstance(row, list) and row else str(row) for row in values)
        if values_str:
            list_prompt += f"\n\nUse this information to help create the list:\n{values_str}"
    list_prompt += "\nReturn ONLY a JSON object with a key 'items' whose value is a JSON array of the items for the list. "
    list_prompt += "Each item should be a single value. "
    list_prompt += "Do not include any explanatory text, just the JSON object. "
    list_prompt += 'For example: {"items": ["item1", "item2", "item3"]}'

    # Prepare payload; response_format enables JSON mode for structured, parseable output
    payload = {
        "messages": [{"role": "user", "content": list_prompt}],
        "temperature": temperature,
        "model": model,
        "max_tokens": max_tokens,
        "response_format": {"type": "json_object"}
    }
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
        "Accept": "application/json"
    }

    # Make API request (timeout prevents an unresponsive endpoint from hanging the call)
    try:
        response = requests.post(api_url, headers=headers, json=payload, timeout=60)
        if response.status_code == 429:
            return "Error: You have hit the rate limit for the API. Please try again later"
        response.raise_for_status()
        response_data = response.json()
        content = response_data["choices"][0]["message"]["content"]
        try:
            list_data = json.loads(content)
            if isinstance(list_data, dict) and "items" in list_data:
                list_data = list_data["items"]
            elif isinstance(list_data, dict):
                # Fall back to the first list-valued key if 'items' is absent
                for value in list_data.values():
                    if isinstance(value, list):
                        list_data = value
                        break
            if isinstance(list_data, list):
                result = []
                for item in list_data:
                    if isinstance(item, list):
                        # Take the first cell of a nested row; blank if the row is empty
                        result.append([str(item[0])] if item else [""])
                    else:
                        result.append([str(item)])
                return result
            else:
                return "Error: Unable to parse response. Expected a list"
        except (json.JSONDecodeError, ValueError):
            return "Error: Unable to generate list. The AI response wasn't in the expected format"
    except requests.exceptions.RequestException as e:
        return f"Error: API request failed. {str(e)}"
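The JSON-handling branch above can be exercised without an API call. The sketch below is a simplified mirror of that parsing logic: prefer the `items` key, fall back to the first list-valued key, and emit a single-column 2D list.

```python
import json

def parse_items(content):
    """Simplified mirror of AI_LIST's response parsing (no API call involved)."""
    data = json.loads(content)
    if isinstance(data, dict) and "items" in data:
        data = data["items"]
    elif isinstance(data, dict):
        # Fallback: use the first list-valued key in the object
        for value in data.values():
            if isinstance(value, list):
                data = value
                break
    if not isinstance(data, list):
        return "Error: Unable to parse response. Expected a list"
    return [[str(item[0])] if isinstance(item, list) and item else [str(item)] for item in data]

# Well-formed response with the expected 'items' key
print(parse_items('{"items": ["Revenue Growth", "Conversion Rate"]}'))
# → [['Revenue Growth'], ['Conversion Rate']]

# Model used a different key: the fallback still finds the list
print(parse_items('{"kpis": ["CAC", "CLV"]}'))
# → [['CAC'], ['CLV']]
```

Returning single-element rows (rather than a flat list) is what lets Excel spill the result one item per row.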