AI is the coolest thing in tech right now, but getting an AI-powered website up and running can seem pretty daunting. Luckily, there are a bunch of useful tools to make it easier.
A while back we started seeing if we could use large language models to provide a helpful assistant for PythonAnywhere; we found that the capabilities (and perhaps more importantly, our own AI skills) aren’t quite there yet, but it was a lot of fun, and it felt like it would be a good basis for a new tutorial :-)
So, would you like to create your own personal PythonAnywhere guru – albeit one that occasionally gets things wrong or makes things up? This tutorial shows you how to set up a website that uses OpenAI’s libraries, but can answer questions about our site with more in-depth knowledge than ChatGPT has on its own. We’ll also touch on how to run much simpler AI models on PythonAnywhere itself, without needing to use external APIs. It’s meant as a jumping-off point – you’ll build something and understand it well enough that you can start customizing it to do something you want to do.
You can go through all of this tutorial with a free PythonAnywhere account, but you will need an OpenAI account for the second part; as of this post, they give you US$5 worth of free usage when you initially sign up, which will be more than enough for this tutorial. I’ll assume that you have some basic Python and Flask knowledge, and know the basics of using PythonAnywhere; if you don’t, it would be a good idea to check out our Flask tutorial first, at least up to the “Bring on the database” section.
Step 1: sentiment classification with a Hugging Face model
Let’s get started! If you already have a PythonAnywhere account, I recommend you sign up for a second free one (you can use the same email address as the one for your main account) because the packages that we’re going to install will use up quite a bit of disk space. So, sign up, and then start a Bash console. In there, we’ll install a library that we’re going to use to run some AI code on PythonAnywhere itself: transformers, which makes it easy to download and run free AI models that other people have put on the Hugging Face AI community website.
To do that, run this command:
pip install --user transformers
This will take a minute or so to run.
Once everything’s installed, let’s try it out! We’re going to use a model called distilbert-base-uncased-emotion, which was created by Hugging Face user bhadresh-savani. It can classify text against a fixed set of possible emotional labels: sadness, joy, love, anger, fear and surprise.
First, let’s start up an IPython interpreter:
ipython
…and then run this code to download and install the model from Hugging Face:
from transformers import pipeline
classifier = pipeline("text-classification", model='bhadresh-savani/distilbert-base-uncased-emotion', return_all_scores=True)
You’ll see the transformers library download the model:
You may also get an error about CUDA like the one at the top of that screenshot, saying that it can’t load the CUDA library. As it says, “Ignore above cudart dlerror if you do not have a GPU set up on your machine” – you’ll be running the model on PythonAnywhere, with no GPU support, so it’s not a problem.
Now, let’s try using it!
prediction = classifier("PythonAnywhere is awesome")
from pprint import pprint
pprint(prediction)
The output contains a list, with a dict for each possible label, each associated with a score that represents (on a scale of 0 to 1) how likely that label is to be the appropriate one for the sentence – you can see that it’s choosing “joy” as the most likely, with a score of 0.996… – which is a pretty good choice :-)
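If you just want the single most likely label, you can pull it out with a couple of extra lines (we’ll use the same trick in the website code later). This is just a sketch – the structure shown in the comment is illustrative, and the exact scores will differ:

# prediction is a list containing one inner list of label/score dicts, roughly:
#   [[{'label': 'sadness', 'score': 0.0004}, {'label': 'joy', 'score': 0.9964}, ...]]
top = max(prediction[0], key=lambda entry: entry["score"])
print(f"{top['label']}: {top['score']:.3f}")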
Let’s try the opposite:
prediction = classifier("PythonAnywhere sucks!")
pprint(prediction)
Again, not a bad label for that (obviously completely incorrect) sentence.
But let’s quickly revisit one thing in the output when we downloaded that model:
That’s more than 250MiB for one simple model – and I picked that one for this tutorial because it’s a nice small one! We’ll talk more about that later.
Step 2: a sentiment classification website using Flask
Let’s bake what we have right now into a website. Go to the “Web” page inside PythonAnywhere (you can use the “hamburger” menu at the top right of the page to get there quickly):
Click “Add a new web app” on the left
We’ll have it running on the default domain, so just click “Next” on the first screen
Pick “Flask” on the next screen
Then the most recent Python version (which will be the default for your account) on the next:
…and then just accept the default location for the website’s code by clicking “Next”:
Your website will be created, and you’ll get a page showing its configuration:
Open the website itself in a new browser tab by right-clicking on the website name at the top, and it should look like this:
Leave that browser tab open, as we can use it to test our site once we start adding stuff to it.
Back in the other tab where we’re looking at the PythonAnywhere “Web” page, where we created the site, scroll down to the “Code” section and click on the “Go to directory” link in the “Source code” row:
…and in the file browser that comes up, click on the file flask_app.py to load up our code.
We have this example setup code:
# A very simple Flask Hello World app for you to get started with...

from flask import Flask

app = Flask(__name__)


@app.route('/')
def hello_world():
    return 'Hello from Flask!'
What we want to do is change that into a simple Flask site that prompts the user with a page where they can enter some text, and then tells them the most likely emotion expressed in that text. Here it is, using a slightly modified version of the prediction code we had above:
from flask import Flask, request, render_template_string
from transformers import pipeline

app = Flask(__name__)

classifier = None

def get_classifier():
    global classifier
    if classifier is None:
        classifier = pipeline("text-classification", model='bhadresh-savani/distilbert-base-uncased-emotion', return_all_scores=True)
    return classifier


@app.route('/', methods=['GET', 'POST'])
def classify_emotion():
    if request.method == 'POST':
        text = request.form['text']
        classifier = get_classifier()
        prediction = classifier(text)
        max_emotion = max(prediction[0], key=lambda x: x['score'])
        result = f"The most probable emotion is '{max_emotion['label']}' with a score of {max_emotion['score']:.2f}"
        return render_template_string('''
            <h1>Emotion Classification Result</h1>
            <p><b>You entered</b></p>
            <pre style="white-space: pre-wrap;">{{ text }}</pre>
            <p>{{ result }}</p>
            <a href="/">Go Back</a>
        ''', text=text, result=result)

    return render_template_string('''
        <h1>Enter a sentence to classify its emotion:</h1>
        <form method="post">
            <textarea name="text" rows="4" cols="50"></textarea><br><br>
            <input type="submit" value="Classify">
        </form>
    ''')
Before going on to an explanation, let’s give it a whirl! Replace the existing code with the code above, save it, then click the reload button at the top of the page:
…and then go to the browser tab where you have the website open. Refresh the page, and…
We’ve created an AI-powered website, running on PythonAnywhere, in just a few minutes :-)
Let’s take a look at the code. I won’t explain the Flask stuff in detail, as that should be pretty clear. And the classification code is basically the same as what we ran in the console earlier, except that we’re pulling out the highest-scoring emotion and just showing that instead of showing the scores for all of them.
However, one thing that will probably look a bit strange is this:
classifier = None

def get_classifier():
    global classifier
    if classifier is None:
        classifier = pipeline("text-classification", model='bhadresh-savani/distilbert-base-uncased-emotion', return_all_scores=True)
    return classifier
Why, you might wonder, do we not just do this outside of any of the functions:
classifier = pipeline("text-classification", model='bhadresh-savani/distilbert-base-uncased-emotion', return_all_scores=True)
…and then use the classifier directly inside the classify_emotion function? The reason is that code outside functions is run when a website is first started, and then the server process forks off into one or more subprocesses to handle requests. If we used a classifier that we’d created prior to the fork, all of those subprocesses would share the same one. So we need to have a classifier for each subprocess so that they don’t trip over each other by trying to use the same one at the same time. Using a “singleton-like” pattern like that in your website code is generally a good idea to prevent problems.

(We could, of course, have just created the classifier inside classify_emotion, creating a new one for every incoming request – but that takes time, so it would make our site slow.)
Pause for thought: should we run AI stuff inside PythonAnywhere websites?
Let’s take stock. We’ve built a Flask website using a model published by a Hugging Face user to classify sentences based on the emotion they express. This is pretty cool, and it’s easy to think of cases where that might be all you need for your AI-powered site. But the cool shiny stuff these days is all about having code that not only understands simple things about sentences, but can also understand and answer questions: large language models, LLMs, like ChatGPT and Claude.
The model that we downloaded, as we saw, was larger than 250MiB. But LLMs are much bigger than that (the clue is in the name). A “small” model with seven billion parameters will weigh in at 14GiB or so. Even worse, while we could run bhadresh-savani’s distilbert-base-uncased-emotion model on a CPU, LLMs realistically need a GPU to run on – and not just any GPU; even normal gaming GPUs like an RTX 3060 will struggle – and I speak from experience :-(
What you need to run the latest large models is at least an Nvidia H100, which costs tens of thousands of dollars (check out the reviews, they’re wonderful). You can rent machines with them – as of this writing, Paperspace offers those at $3.09/hour, which could work well if you were just experimenting, but that comes to over US$2,000/month, so it might be a bit costly for a small website if you’re running it all the time.
Running heavy-duty AI stuff on a hosting provider like PythonAnywhere, or even on your own machine, doesn’t make commercial sense right now. That will probably change in the future, but for the moment, what we need is some way to access an LLM without paying huge amounts of money, and this is where the OpenAI API comes in. You can get access to the same hardware that runs ChatGPT, but drive it directly yourself, which gives you much more flexibility. And you can then use PythonAnywhere as a place to control those API calls, acting as a hub to provide the customization that you need.
So let’s get started with that!
Step 3: using the OpenAI API
Firstly, you’ll need to go to the OpenAI website and sign up to use their API – the screenshots below might be out of date by the time you read this, but hopefully things won’t have changed too much.
Once you’ve signed up, you should see that you have some credits to allow you to experiment – US$5 as of this writing. So let’s put them to use. First, you’ll need to generate an API key:
Copy it and paste it somewhere safe.
Now, go back to the PythonAnywhere Bash console where we started all of this. We’re going to write some code to hit that API. Exit the IPython shell that we left running (it might take a second or two to exit while it unloads the model from memory), and install OpenAI’s wrapper library:
pip install --user openai
You’ll get an error at the end about a conflict with the arviz package, but you can ignore that.
Next, start a fresh IPython shell:
ipython
Firstly, we’ll import the openai library and create a client object, telling it what API key to use (replace YOUR_API_KEY_HERE with your actual API key, of course):
import openai
client = openai.OpenAI(api_key="YOUR_API_KEY_HERE")
Now, let’s make an API call! The API works with a concept of “chat completions”: you provide it with a list of messages that have been sent in a conversation so far – both from the user and replies from the chatbot – and it returns the next message from the bot. We’re starting a chat here, so let’s try this:
completion = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a helpful chatbot that is an expert in the online Python coding and hosting platform, PythonAnywhere."},
        {"role": "user", "content": "Is PythonAnywhere awesome?"}
    ]
)
print(completion.choices[0].message.content)
Well, that’s nice to hear :-)
Looking at that code again: we sent it a conversation with two messages – one from “system”, which is a useful way to tell the AI information about the world that doesn’t fit into a chat model, and then a message from a user – and it came back with the bot’s response. Going into how you might scale that up to handle a full conversation is out of scope for this tutorial, but essentially you’d just remember the messages that the user sent to the bot, and the responses that the bot sent back – then, for each new message from the user, you’d send a “transcript” of the conversation so far, so that it would know the full context and understand what it should say next. You’d also need various tricks to handle longer chats, where the transcript starts getting really big – there’s a limit to the number of tokens (roughly speaking, words) that you can provide to the LLM (and even worse, OpenAI charge you per token, for both inputs and outputs). If you’re interested in hearing more about that, post a comment below :-)
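Just as a rough sketch of that idea (we won’t use this anywhere else in the tutorial), a minimal “remember the transcript” loop might look something like the code below. The history list and the chat helper are names I’ve made up for illustration; it assumes the client object we created above:

# Hypothetical sketch: keep a running transcript so the bot sees the full conversation.
history = [
    {"role": "system", "content": "You are a helpful chatbot that is an expert in the online Python coding and hosting platform, PythonAnywhere."},
]

def chat(user_message):
    # Record the user's message, send the whole transcript, then remember the bot's
    # reply so that the next call has the full context.
    history.append({"role": "user", "content": user_message})
    completion = client.chat.completions.create(model="gpt-3.5-turbo", messages=history)
    reply = completion.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply

print(chat("Is PythonAnywhere awesome?"))
print(chat("What makes it awesome?"))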
But for now, let’s move on to creating a website.
Step 4: A website using OpenAI for its AI capabilities
What we want is an AI-powered website to answer questions about PythonAnywhere.
We have the code, so let’s modify our Flask website to use it! Go back to the file editor, and update the code for the site so that it looks like this (don’t forget to replace YOUR_API_KEY_HERE with your actual API key):
from flask import Flask, request, render_template_string
import openai

app = Flask(__name__)


@app.route('/', methods=['GET', 'POST'])
def answer_question():
    if request.method == 'POST':
        question = request.form['text']
        client = openai.OpenAI(api_key="YOUR_API_KEY_HERE")
        completion = client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=[
                {"role": "system", "content": "You are a helpful chatbot that is an expert in the online Python coding and hosting platform, PythonAnywhere."},
                {"role": "user", "content": question}
            ]
        )
        result = completion.choices[0].message.content
        return render_template_string('''
            <h1>Your answer</h1>
            <p><b>You entered</b></p>
            <pre style="white-space: pre-wrap;">{{ text }}</pre>
            <p><b>The bot replied</b></p>
            <pre style="white-space: pre-wrap;">{{ result }}</pre>
            <a href="/">Go Back</a>
        ''', text=question, result=result)

    return render_template_string('''
        <h1>Enter your PythonAnywhere question:</h1>
        <form method="post">
            <textarea name="text" rows="4" cols="50"></textarea><br><br>
            <input type="submit" value="Ask">
        </form>
    ''')
Again, that probably doesn’t need too much explanation – it’s just a slightly modified version of the old Flask code combined with the OpenAI API call that we did in the console. So, save the updated file, and then reload the website’s code:
…and let’s try the website!
We have a website that can answer questions about PythonAnywhere using OpenAI!
Or do we? Let’s click “go back” and try a real question:
Oh dear. If you’ve ever created a MySQL database on PythonAnywhere, you’ll know that that is completely wrong. It starts off quite well, but as it goes on it diverges further and further from the real user interface; it’s hallucinating (or strictly speaking, confabulating).
It’s worth digging a bit into why that’s happening. The LLM that is answering our question was trained on a huge volume of text, pretty much everything on the Internet. PythonAnywhere has documentation on the Internet, and it looks like the LLM “saw” that as part of its training. But all it’s doing in response to our question is predicting what would come next in a conversation where a user asked the question “How do I create a MySQL database in PythonAnywhere?”. It doesn’t actually have access to the documentation that it was trained with – you can think of it as having vague memories of it, but also of all of the other text it has seen – documentation for other hosting platforms, random Wikipedia articles, newspaper columns – everything. So what comes back is a bit of a mishmash of all of them, just like it might be if someone asked you for detailed instructions for how to use a website that you last looked at a few years ago but can’t remember all that well.
It would be great if we could somehow provide it with the PythonAnywhere documentation as part of the conversation history. You can imagine something like this:
completion = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a helpful chatbot that is an expert in the online Python coding and hosting platform, PythonAnywhere."},
        {"role": "system", "content": "Just to jog your memory, here is all of the documentation for the platform"},
        {"role": "system", "content": "<a string containing every page from help.pythonanywhere.com>"},
        {"role": "user", "content": "How do I create a MySQL database in PythonAnywhere?"}
    ]
)
But there’s a limit to how much you can put into the API call – not to mention the cost associated with larger requests.
A good solution would be if we could provide the bot with just the specific part of the docs that describes the bit of PythonAnywhere that the user is asking for help about. But how can we do that? We can’t ask the AI, because it will just hallucinate stuff. We could perhaps do something with the Google API to search for relevant pages, but it would be tricky to work out what the right search terms would be for a particular user question.
Step 5: using embeddings to provide context
Luckily, there is a solution, which involves using embeddings. An embedding is a vector – a list of numbers – that represents, in some sense, what a bit of text is about. They can be created by appropriately-trained neural networks, which are called (appropriately enough) embedding models. To take a concrete example: if you generated embeddings for the words “clever” and “intelligent”, they would be quite similar. But the embeddings for “clever” and “chair” would be very different.
That’s word embeddings; more recent embedding models can generate vectors that represent the meaning of a whole sentence, or even a document. And the neat thing is that the embedding for a question like “How do I create a MySQL database in PythonAnywhere?” is likely to be very close in embedding space to a documentation page that describes how to create MySQL databases on our site.
My own favourite mental model for embeddings is that they’re kind of like the numbers used in the Dewey Decimal System, in which each book in a library has a number associated with it that succinctly describes what it’s about. Unfortunately that reference is probably only helpful if you’re Generation X or older, as the DDS is not as useful as it was in pre-Internet days and is not generally taught to schoolchildren :-(
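If you’d like to see that “clever”/“intelligent”/“chair” effect for yourself, here’s a quick optional experiment you can run in the IPython shell from Step 3 (it assumes the client object we created there is still around, and uses the text-embedding-ada-002 embedding model that we’ll come back to in a moment). The exact numbers will vary, but the first pair should come out much more similar than the second:

from scipy import spatial

# Ask OpenAI for an embedding for each word.
response = client.embeddings.create(
    model="text-embedding-ada-002",
    input=["clever", "intelligent", "chair"],
)
clever, intelligent, chair = [item.embedding for item in response.data]

# Cosine similarity: close to 1 means very similar, close to 0 means unrelated.
print(1 - spatial.distance.cosine(clever, intelligent))
print(1 - spatial.distance.cosine(clever, chair))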
That means that if we could get all of the PythonAnywhere help pages, along with embedding vectors for each of them, then we’d be able to generate an embedding for the user’s question, compare that to all of the pages’ embeddings, pick the closest match, and put it into the prompt we’re sending to OpenAI. This is a technique called Retrieval Augmented Generation, and it’s pretty much the best practice right now for getting LLMs to produce reliable, trustworthy content about a specific area.
So let’s use it :-)
Go back to the Bash console where we were working earlier, and download a file that we’ve prepared for you: it’s a single file containing summaries of all of our help pages as of a couple of weeks ago, with an embedding for each of them. The embeddings were generated using an OpenAI embedding model called text-embedding-ada-002 – that’s important, because different embedding models produce completely different vectors, so you need to use the same model to generate the embedding for the user’s question as you did for the embeddings for the documentation if you want them to be comparable.
Here’s how you download the file from inside IPython:
!wget https://blog.pythonanywhere.com/help-pages-and-embeddings-2023-10.jsonl
Now we can load them in:
import json

document_embeddings = []
with open("help-pages-and-embeddings-2023-10.jsonl", "r") as f:
    for line in f:
        document_embeddings.append(json.loads(line))
print(len(document_embeddings))
So that’s 128 documents; a nice round number for us computer scientists. Each one is a dict with three keys: “title”, “content” and “embedding”. They’re pretty big so I won’t show screenshots of printing them out, but do take a look if you want :-)
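For example, to take a quick peek at the first document without flooding the console (the keys are the ones just described; the slicing is only there to keep the output short):

first_doc = document_embeddings[0]
print(first_doc["title"])
print(first_doc["content"][:300])    # just the start of the page summary
print(len(first_doc["embedding"]))   # the embedding itself is a long list of numbers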
Now, let’s calculate the embedding for our user’s question, “How do I create a MySQL database in PythonAnywhere?”. We’ll use the OpenAI API again, this time asking it for an embedding for the question using the model that was used to generate the ones for the documents:
response = client.embeddings.create(model="text-embedding-ada-002", input=["How do I create a MySQL database in PythonAnywhere?"])
question_embedding = response.data[0].embedding
print(question_embedding)
You’ll get a huge list of numbers:
print(len(question_embedding))
1536 of them, to be precise. And you can see that the embeddings for the documents are the same length:
print(len(document_embeddings[0]["embedding"]))
So now what we want to do is find out which of the documents has an embedding that is closest to the question’s. If we were building a large-scale AI system, which might have a vast number of documents to match against, we would use a specialised tool – a vector database – that has an efficient way of working that out for us, using appropriate indexing. But we’ve only got 128 documents, so we can just iterate over all of them, work out how closely each document’s embedding matches our question’s, and store the resulting (document, closeness) pairs in a list. We then reverse-sort that list by closeness, so that the closest document is at the start, and pull out the first element.
We use scipy’s cosine distance function to compare vectors against each other; the closer it is to zero, the more similar the two vectors are. Subtracting it from one gives the cosine similarity, where 1 means a perfect match and values near 0 mean the two pieces of text are unrelated.
Let’s see that in action, and find the title of the help page which most closely matches the embedding we’ve already generated for the question “How do I create a MySQL database in PythonAnywhere?”
from scipy import spatial

def relatedness(embedding1, embedding2):
    return 1 - spatial.distance.cosine(embedding1, embedding2)

docs_by_relatedness = [
    (de, relatedness(question_embedding, de["embedding"]))
    for de in document_embeddings
]
docs_by_relatedness.sort(key=lambda x: x[1], reverse=True)

closest_doc, doc_relatedness = docs_by_relatedness[0]
print(closest_doc["title"])
That’s looking pretty good! Let’s take a look at the contents:
print(closest_doc["content"])
Perfect :-)
So now let’s re-run our code from above, injecting the document into the conversation:
completion = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a helpful chatbot that is being trained to answer questions about the online Python coding and hosting platform, PythonAnywhere."},
        {"role": "system", "content": "You should use this help page as your sole source of information about PythonAnywhere in your answers; disregard your pre-existing knowledge, as it may be out of date or incorrect."},
        {"role": "system", "content": f"{closest_doc['title']}\n\n{closest_doc['content']}"},
        {"role": "user", "content": "How do I create a MySQL database in PythonAnywhere?"},
    ]
)
print(completion.choices[0].message.content)
Note that we’ve changed the instructions in the “system” parts of the prompt quite a bit – it’s no longer an expert, but is now a trainee. And we’ve also told it to ignore its pre-existing knowledge. Both of these help reduce the chances of the bot thinking “I know better than this help page” and heading off into hallucination territory.
Well, that’s significantly better! Let’s build it into our website (again, don’t forget to replace YOUR_API_KEY_HERE):
import json

from flask import Flask, request, render_template_string
import openai
from scipy import spatial

app = Flask(__name__)

document_embeddings = None

def get_document_embeddings():
    global document_embeddings
    if document_embeddings is None:
        document_embeddings = []
        with open("help-pages-and-embeddings-2023-10.jsonl", "r") as f:
            for line in f:
                document_embeddings.append(json.loads(line))
    return document_embeddings


def relatedness(embedding1, embedding2):
    return 1 - spatial.distance.cosine(embedding1, embedding2)


def get_closest_doc(client, question):
    response = client.embeddings.create(model="text-embedding-ada-002", input=[question])
    question_embedding = response.data[0].embedding
    docs_by_relatedness = [
        (de, relatedness(question_embedding, de["embedding"]))
        for de in get_document_embeddings()
    ]
    docs_by_relatedness.sort(key=lambda x: x[1], reverse=True)
    closest_doc, doc_relatedness = docs_by_relatedness[0]
    return closest_doc


@app.route('/', methods=['GET', 'POST'])
def answer_question():
    if request.method == 'POST':
        question = request.form['text']
        client = openai.OpenAI(api_key="YOUR_API_KEY_HERE")
        closest_doc = get_closest_doc(client, question)
        help_page = f"{closest_doc['title']}\n\n{closest_doc['content']}"
        completion = client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=[
                {"role": "system", "content": "You are a helpful chatbot that is being trained to answer questions about the online Python coding and hosting platform, PythonAnywhere."},
                {"role": "system", "content": "You should use this help page as your sole source of information about PythonAnywhere in your answers; disregard your pre-existing knowledge, as it may be out of date or incorrect."},
                {"role": "system", "content": help_page},
                {"role": "user", "content": question}
            ]
        )
        result = completion.choices[0].message.content
        return render_template_string('''
            <h1>Your answer</h1>
            <p><b>You entered</b></p>
            <pre style="white-space: pre-wrap;">{{ text }}</pre>
            <p><b>This context was given to the bot</b></p>
            <pre style="white-space: pre-wrap;">{{ help_page }}</pre>
            <p><b>The bot replied</b></p>
            <pre style="white-space: pre-wrap;">{{ result }}</pre>
            <a href="/">Go Back</a>
        ''', text=question, help_page=help_page, result=result)

    return render_template_string('''
        <h1>Enter your PythonAnywhere question:</h1>
        <form method="post">
            <textarea name="text" rows="4" cols="50"></textarea><br><br>
            <input type="submit" value="Ask">
        </form>
    ''')
…reload the website’s code…
Ask the question again, and:
As you can see, I got a pretty solid response here! But yours might be worse, or even better. What should be certain, though, is that it will be much more grounded in reality than the original unaugmented response.
We’ve built a (somewhat unreliable) PythonAnywhere guru using OpenAI’s APIs combined with Retrieval Augmented Generation, all running as a PythonAnywhere website :-) Try a few further questions, and see what it comes up with.
Reducing hallucinations further
The prompts that we’ve put into the website above to try to keep the AI on track and to avoid hallucinations are the best we’ve found so far, but they could probably be improved – we’ll update this tutorial if we come up with anything better (and if you find any better ones, feel free to post them in the comments below).
Another way to improve things would be to improve the documents in the file of embeddings. In the interests of producing a nice and simple demo, we’ve just used an export of our help pages (with a little massaging to make them smaller). A production-ready RAG system would have documents to retrieve that were optimised for this kind of use – perhaps there would be more of them, or fewer. Ultimately, what’s good for human understanding isn’t necessarily the best for LLMs.
But the simplest solution we’ve found so far to keep things on track is to switch from “gpt-3.5-turbo” to “gpt-4”. It’s like night and day – the newer model is much better. Unfortunately, that model is only available to people who have made at least one payment to OpenAI for API usage, so it wouldn’t have worked for this tutorial. If you’re already a regular OpenAI API user, though, we do recommend you try changing the model in the website code from gpt-3.5-turbo to gpt-4, just to see how much better it is.
Conclusion
That’s it! We’ve shown you how to run simple AI models on PythonAnywhere and how to build them into a website. We’ve then shown that while it doesn’t make sense to run more heavy-duty AI stuff on PythonAnywhere itself, you can offload the number-crunching to APIs like OpenAI, while using your code on PythonAnywhere to host the site and provide context to the AI.
We hope you found it interesting! If you have any comments and questions, please do post them below. And if enough people are interested, we’ll do a follow-up – suggestions would be very welcome.
Alternatively, if you’d like a more in-depth overview to find out how the pros would build a RAG system, check out Andrew Huang and Sophia Yang’s post, “How to Build a Retrieval-Augmented Generation Chatbot” on the blog of our parent company, Anaconda.
And who knows… maybe with further work we can get this working as a real tool to help answer people’s questions about PythonAnywhere. Keep an eye out for any new “staff members” in the forums who post long, detailed responses suspiciously quickly after you ask a question…