Chat
The chat endpoint is an AI assistant infused with live news. You can ask it questions about the news, and it will respond with the latest information. You can also choose the model you'd like to use:
- gpt-4o-mini: Fast response, but less intelligent (default, Fast tier).
- gpt-4o: Fast response, rich and intelligent (Rich tier).
- claude-3-5-sonnet-latest: The latest Claude model (Rich tier).
- meta-llama/Meta-Llama-3.1-70B-Instruct: Defaults to the latest Llama-3.1-70b model (Fast tier).
- meta-llama/Meta-Llama-3.3-70B-Instruct: Defaults to the latest Llama-3.3-70b model (Fast tier).
- meta-llama/Meta-Llama-3.1-405B-Instruct: Defaults to the latest Llama-3.1-405b model (Rich tier).
Want to test it out or request a new model? Join our Discord server to talk directly to the bot, or ask us to integrate new models.
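For quick reference, the tier of each model above can be captured in a small lookup table. This is a convenience sketch for your own code, not part of the SDK; the API itself simply takes the model name as a string.

```python
# Tier of each supported model, as listed above.
MODEL_TIERS = {
    "gpt-4o-mini": "fast",
    "gpt-4o": "rich",
    "claude-3-5-sonnet-latest": "rich",
    "meta-llama/Meta-Llama-3.1-70B-Instruct": "fast",
    "meta-llama/Meta-Llama-3.3-70B-Instruct": "fast",
    "meta-llama/Meta-Llama-3.1-405B-Instruct": "rich",
}

def models_in_tier(tier: str) -> list[str]:
    """Return the names of the models available in the given tier."""
    return [model for model, t in MODEL_TIERS.items() if t == tier]
```

For example, `models_in_tier("rich")` returns gpt-4o, claude-3-5-sonnet-latest, and the Llama-3.1-405b model.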
Querying the Chat endpoint
The chat endpoint follows the OpenAI format for chat completions. The simplest way to use it is to use our pre-built SDK. The SDK will handle the authentication and response parsing for you. If you prefer to use the OpenAI SDK, you can use the same endpoint with the same parameters.
AskNews SDK
from asknews_sdk import AskNewsSDK

sdk = AskNewsSDK(
    client_id="your_client_id",
    client_secret="your_client_secret",
    scopes=["chat", "news", "stories"]
)

def chat_query():
    response = sdk.chat.get_chat_completions(
        messages=[
            {
                "role": "user",
                "content": "What's going on in the latest tech news?"
            }
        ],
        stream=False
    )

    # The response object matches the OpenAI SDK response object:
    print(response.choices[0].message.content)

chat_query()
Controlling the output
You can control many aspects of the output including:
- inline_citations: Whether to include inline citations in the response, and how to represent them (e.g. markdown links or numbers).
- journalist_mode: Whether to activate our journalist mode, which enforces higher journalistic integrity: claims are supported with evidence, output styling is improved, and the model cites itself more carefully. If journalist_mode is deactivated, the AI operates with a barebones prompt, and it is up to you to supply your own prompt for handling citations and output style.
- append_references: Whether or not to append all the references, with image links and article links, to the end of the output.
- asknews_watermark: Whether or not to include "Generated by AI at AskNews on (date)" at the end of the output. This is important for tracking and transparency if the output is used in public domains, but you can remove it by setting this parameter to False.
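To give a feel for the two inline citation styles, here is a purely illustrative formatter. The actual citations are produced by the API inside the generated text; "markdown_link" is the value used in the example below, while the numeric style is rendered here as an assumption for illustration.

```python
def render_citation(title: str, url: str, style: str, index: int = 1) -> str:
    """Illustrate roughly how an inline citation looks in each style.
    Illustrative only -- the API emits these inside its generated text."""
    if style == "markdown_link":
        return f"[{title}]({url})"
    # Numeric style: a bracketed reference number.
    return f"[{index}]"
```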
from asknews_sdk import AskNewsSDK

sdk = AskNewsSDK(
    client_id="your_client_id",
    client_secret="your_client_secret",
    scopes=["chat", "news", "stories"]
)

def chat_query():
    response = sdk.chat.get_chat_completions(
        messages=[
            {
                "role": "user",
                "content": "What's going on in the latest tech news?"
            }
        ],
        stream=False,
        inline_citations="markdown_link",
        append_references=False,
        journalist_mode=True
    )

    # The response object matches the OpenAI SDK response object:
    print(response.choices[0].message.content)

chat_query()
Using filter_params for targeted news chats
You can use the filter_params parameter to target your chat completions with all the same filter parameters available in /news. This is useful if you want to ask questions about a specific article or set of articles, a specific source, a specific reporting voice, a specific graph, etc. The filter_params parameter is a dictionary that matches the parameters available in the /news endpoint.
Let's say you want to focus your chat on a specific subset of tech news:
from asknews_sdk import AskNewsSDK

ask = AskNewsSDK()

def chat_query():
    response = ask.chat.get_chat_completions(
        messages=[
            {
                "role": "user",
                "content": "What's going on in the latest tech news?"
            }
        ],
        filter_params={
            "query": "tech trends during 2024 in mobile devices and software",
            "hours_back": 1440,
            "categories": ["Technology"],
            "sources": ["TechCrunch"],
            "reverse_string_guarantee": ["Apple"]
        }
    )

    # The response object matches the OpenAI SDK response object:
    print(response.choices[0].message.content)

chat_query()
If you want to automate building the filter_params, you can call our /autofilter endpoint, which lets you describe your news filter in natural language and returns a dictionary of parameters that can be passed directly to the filter_params parameter.
from asknews_sdk import AskNewsSDK

ask = AskNewsSDK()

def chat_query():
    autofilter = ask.chat.get_autofilter(
        query="Get me all the tech news related to mobile devices and software from the past 3 months, but don't include any news about Apple."
    )

    response = ask.chat.get_chat_completions(
        messages=[
            {
                "role": "user",
                "content": "What's going on in the latest tech news?"
            }
        ],
        filter_params=autofilter.model_dump(mode="json")
    )

    # The response object matches the OpenAI SDK response object:
    print(response.choices[0].message.content)

chat_query()
OpenAI SDK
It is possible to use the OpenAI SDK to query the chat endpoint. The AskNews API uses OAuth 2.0 for authentication instead of API keys, so to use the OpenAI SDK with the AskNews API you must first obtain an access token, then pass it in the Authorization header much like you would an API key. Luckily, there are libraries that can handle this for you. The OpenAI SDK uses httpx under the hood and allows you to pass a custom client. For the OAuth 2.0 handling, there is a simple little package named httpx_auth that already implements it. Below is an example of how to use the OpenAI SDK with these libraries to easily set this up:
import asyncio
from httpx import AsyncClient
from openai import AsyncOpenAI
from httpx_auth import OAuth2, OAuth2ClientCredentials, JsonTokenFileCache

# This isn't explicitly required, but it helps to not request a token for
# every request. The token is cached in a file and reused until it expires.
OAuth2.token_cache = JsonTokenFileCache('path/to/my_token_cache.json')

openai_client = AsyncOpenAI(
    api_key="",  # Set to an empty string so the SDK doesn't complain
    base_url="https://api.asknews.app/v1/openai",
    http_client=AsyncClient(
        auth=OAuth2ClientCredentials(
            client_id="your_client_id",
            client_secret="your_client_secret",
            token_url="https://auth.asknews.app/oauth2/token",
            scope="chat",  # The `chat` scope is required for the chat endpoint
        )
    ),
)


async def main():
    completion = await openai_client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "user", "content": "What is the current state of bitcoin?"},
        ],
    )
    print(completion)

asyncio.run(main())
You can also include the extra AskNews parameters in your OpenAI client (or LangChain client) requests by adding them to the extra_body parameter.
async def main():
    completion = await openai_client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "user", "content": "What is the current state of bitcoin?"},
        ],
        extra_body={
            "inline_citations": "markdown_link",
            "append_references": False,
            "journalist_mode": True
        }
    )
    print(completion)

asyncio.run(main())
Detailed and up-to-date response structures are always available in the API reference.