Boost Productivity: AI Summarization Hacks

Hi and welcome to this video. Today we are back with Julia, who has about 50 pieces of customer feedback about her new product. The feedback is stored in a Google Sheet, and we can see that it is all in text format. Let's now look at how Julia can summarize this feedback and get actionable insights without spending hours going through each comment. I'll go to the scenario builder, and the first module I need to look for is Google Sheets.

Then I will type "Get Range Values", a useful module for getting all of the values from my sheet. Let's see how to set it up. I'll adjust the first dropdown by selecting from the list and proceed to choose a drive. I know the file has been shared with me, so I'll specify that option. I'll then choose my sheet name and type in the appropriate range. I mentioned there are about 50 feedback entries, so this range should give us sufficient space.

I do have headers, so I specify the right cell, and that's it. Let's test it out. Amazing: I can see my feedback entries coming in, and I have about 50 bundles, which sounds right. Let's continue and add another module. Looking at the output from the previous module, I can see that I have 50 bundles. What we've done previously is pass them one by one to our OpenAI module for things like autocompletion and enrichment.
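The video uses Make's Google Sheets module, so no code is needed. For readers who prefer scripting, here is a minimal sketch of the equivalent step in Python using the gspread library; the credentials file, spreadsheet name, worksheet name, and range are hypothetical placeholders, not values from the video.

```python
import gspread

# Authenticate with a Google service account (hypothetical credentials file).
gc = gspread.service_account(filename="service-account.json")

# Open the shared spreadsheet and worksheet (names are placeholders).
sheet = gc.open("Product Feedback").worksheet("Sheet1")

# Fetch roughly 50 feedback rows; row 1 holds the header, so data starts at A2.
rows = sheet.get("A2:A60")
feedback_items = [row[0] for row in rows if row]

print(f"Fetched {len(feedback_items)} feedback entries")
```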

But now we want to take all of the feedback together and only then pass it to the AI. To do that, I need to use a tool called the Text aggregator. What the Text aggregator does is take each of the small bundles and combine them, essentially creating one very long text containing all of the feedback. I can see that there are only two fields I need to set up.

In the first one, I need to specify which module my values are coming from, but I can see this has already been done for me. In the second, I just map the right column containing the feedback I want to aggregate. That's all. Let's run the scenario to see if it works. At the bottom, I can see that the output is one long list of feedback, which is exactly what I wanted, so everything seems to be working fine.
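In Make, the Text aggregator module does this merge for you. Conceptually it is just a string join; a sketch of the same idea in Python, continuing from the hypothetical feedback_items list above:

```python
# Combine all individual feedback entries into one long text block,
# mirroring what Make's Text aggregator produces.
aggregated_feedback = "\n\n".join(feedback_items)

print(f"{len(feedback_items)} entries, {len(aggregated_feedback)} characters total")
```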

Let's now connect OpenAI to process this feedback. I will look for the module called Create a Completion, and I can see that my connection is already set up, which is great. If you are struggling with how to set up your connection, I invite you to head back to one of our earlier videos where we go into more detail. As for selecting the right method, you can just leave chat completion on and go ahead and pick a model. I can choose, for example, GPT-4o (Omni), one of the newer models.

Let's now head to the prompt. I first add a message and choose the role of user. Previously we used the prompt completion method, which only had one window for the message, and we mentioned that the text you put in that window is exactly what you would write to ChatGPT. When you select the user role here, it is still the same thing. So, as the message content, I will copy and paste my simple prompt there.

I'm asking ChatGPT to analyze the following aggregated feedback related to a new product and to identify the top five most important and recurring themes or issues mentioned in the feedback. It should present the findings in a particular format, and I give it an example of the format I expect. At the bottom, I pass in the text from the aggregator in the previous module.
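The exact wording of Julia's prompt isn't shown here, but its structure is clear: context, task, expected format, then the raw feedback. A hypothetical template along those lines might look like this, reusing the aggregated_feedback variable from the sketch above:

```python
# Hypothetical prompt following the structure described in the video:
# 1) context, 2) task, 3) format example, 4) the aggregated feedback itself.
prompt = f"""You are analyzing customer feedback for a newly launched product.

Task: Analyze the following aggregated feedback and identify the top five
most important and recurring themes or issues mentioned.

Present the findings in this format:
1. <Theme name>: <one-sentence summary> (mentioned in roughly N comments)

Feedback:
{aggregated_feedback}
"""
```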

In summary, here is what I'm doing in the prompt: first, I give it a little bit of context; second, I give it a task; and third, I give it an example of the format I'm looking for, before handing it the raw feedback. Now I just need to specify the number of tokens. Until now we've been working with quite small token counts because we were analyzing items one by one. In this case the input text is long, and you may also want a longer text returned, so the token count needs to be much higher. Let's put in 4,000 tokens.

The reason for this number is that some of the stronger models, such as GPT-4 or GPT-4o, cap their output at only slightly above that figure. All right, that's all. Let's give it a go. I can see the scenario is running; it's taking a little more time because our prompt is very long. Great, I'm getting a result. I can see that one of the first themes in my feedback is durability issues, which might…
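For completeness, here is how the same call could be made directly against the OpenAI API in Python, assuming the openai package (v1 client) and the hypothetical prompt variable from the sketch above; the 4,000-token cap mirrors the setting chosen in the Make module.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Chat completion with a single user message, mirroring the Make module setup.
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": prompt}],
    max_tokens=4000,  # leave room for a long summary of ~50 feedback entries
)

print(response.choices[0].message.content)
```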
