AskMax - Our Lessons Learned

10 June 2025 | John Small

The rapid proliferation of generative AI has led many industries to experiment with customised chatbot solutions, and regulatory compliance is no exception. The Altair Group’s AI chatbot AskMax.je (or Max, as he is better known) came about through a combination of Cyan’s desire to bring technology and innovation to regulatory compliance, a knowledgeable technical partner in Blue Llama, and a significant amount of hard work and testing.

AI-driven tools promise efficiency gains, cost savings and enhanced user experiences. However, in our experience, organisations diving into this technology should be aware that deploying an industry-specific generative AI chatbot comes with complexities and potential pitfalls which can erode those advantages if not managed effectively.

For companies setting out on their own AI journey, we felt it might be helpful to share some of the lessons we learned while creating and launching Max. It should be noted that this article focuses on our creation of a generative AI-based chatbot, but many of the points are common to any AI-related implementation.

Cost/Benefit Analysis

We conducted a significant amount of research before embarking on the creation of Max. One of the key considerations was whether the financial and resource costs would outweigh the potential benefits of undertaking the project. We created Max primarily to help industry, but also to help educate ourselves on this emerging technology. We needed to be sure our investment would meet these aims successfully and that costs such as token usage (tokens measure the size of prompts and responses, and determine the cost of each exchange) would be manageable.

We capped the value available for tokens in our model as a very rudimentary way of preventing malicious users from spamming the system, while also limiting possible costs. By setting a boundary on this value, if a user or bot sent a vast number of requests, Max would simply stop responding and we would receive an alert. This was an easy way to mitigate several risks at once.
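
For illustration, here is a minimal sketch of this kind of per-session token cap; the names, limit and alerting mechanism are hypothetical rather than Max’s actual implementation.

```python
# A minimal sketch of a per-session token budget (illustrative
# names and limits; not Max's actual code).

def count_tokens(text: str) -> int:
    # Crude approximation: real systems use the model's own tokenizer.
    return max(1, len(text) // 4)

class TokenBudget:
    """Caps total token spend for one session and flags abuse."""

    def __init__(self, limit: int = 20_000):
        self.limit = limit
        self.used = 0

    def charge(self, prompt: str, response: str) -> bool:
        self.used += count_tokens(prompt) + count_tokens(response)
        if self.used > self.limit:
            print(f"ALERT: token budget exceeded ({self.used}/{self.limit})")
            return False  # stop responding, as described above
        return True
```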

Define the Scope (and Stick to It)

When we created Max, there was potential to cover a huge range of financial services regulation, but we decided to focus purely on the AML/CFT/CPF regime in Jersey. We felt that the complexities of adding the Codes of Practice for different licence types had the potential to confuse the chatbot and generate answers which might not be accurate. By limiting our scope and loading only the AML/CFT/CPF Handbook, legislation and guidance, we were able to judge the performance of the model much more easily in testing. It would also form a solid knowledge base should we later wish to expand Max’s knowledge to other regimes.

We also wanted to ensure users of the site were clear about what Max was intended to do. We made it very clear this was an experiment rather than a finished product, reinforcing the message with pop-ups and acceptance boxes so that all users understood the responses were not to be relied on and should be used only as a guide. This also gave us reassurance that, should someone try to blame Max’s output for an error, there was clear evidence they must have been aware the responses needed to be checked.

Hallucinations and Misinformation

One of the most significant issues with AI chatbots is the potential for the model to generate plausible-sounding but incorrect or fabricated information. We viewed this as a key risk when creating Max, as even a small inaccuracy in relation to financial services regulation could lead to serious issues.

During testing we found that Max was no exception to this propensity to hallucinate. From our test inputs, we discovered that some of the complexities presented by the local regulatory regime were confusing Max into providing answers which ranged from misleading to plain wrong.

To combat this, we worked with the experts at Blue Llama to structure a curated knowledge base which worked alongside the information already fed to the model. By creating this source of refined information and weighting it favourably within the model, we were able to make Max’s output much more accurate.
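
As a simplified sketch of the general technique (not Blue Llama’s actual design), curated passages can be boosted at retrieval time so they outrank general material of equal relevance; the scoring and boost factor below are purely illustrative.

```python
# Simplified sketch of favourably weighting curated content at
# retrieval time; scoring and boost factor are illustrative.
from dataclasses import dataclass

@dataclass
class Passage:
    text: str
    curated: bool  # True if from the refined knowledge base

def score(passage: Passage, question: str, boost: float = 2.0) -> float:
    # Toy relevance measure: word overlap between question and passage.
    overlap = len(set(question.lower().split()) & set(passage.text.lower().split()))
    return overlap * (boost if passage.curated else 1.0)

def retrieve(passages: list[Passage], question: str, k: int = 3) -> list[Passage]:
    # Curated passages outrank general material at equal relevance.
    return sorted(passages, key=lambda p: score(p, question), reverse=True)[:k]
```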

Since then, we have been able to review the questions asked and audit the output. Where we found inaccuracies, we added corrected information to the curated data. This has proven very successful so far.
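
That review cycle can be as simple as the following sketch, in which reviewer-approved corrections are appended to the curated data; every name here is hypothetical.

```python
# A minimal sketch of the review-and-correct cycle described above;
# all names are hypothetical.
def apply_corrections(qa_log: list[tuple[str, str]],
                      flagged: dict[str, str],
                      curated_data: list[str]) -> None:
    """Append a reviewer-approved correction for each flagged question."""
    for question, _answer in qa_log:
        correction = flagged.get(question)
        if correction and correction not in curated_data:
            curated_data.append(correction)
```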

Users can be Mischievous

We wanted to encourage as many people as possible to use Max, so we made it free to use and anonymous. As a result, we could see only what was asked and the responses Max gave. We reviewed these frequently to ensure that the model was behaving and Max wasn’t going rogue in his guidance. What we couldn’t account for, however, was the mischievousness of users and, on occasion, blatant attempts to corrupt poor Max with their requests. In the first few weeks we saw everything from asking whether he spoke Spanish through to trying to trick him into explaining how someone could hide the proceeds of crime.

Fortunately, Max was very well trained (and tested) and declined to answer questions which could be considered off-topic, evil or illegal. This was achieved through careful adjustment of the ‘temperature’ of the responses during the testing period (the ‘temperature’ being the amount of leeway or creativity the chatbot can exercise when generating responses). We wanted to ensure he was helpful, but not overly so where topics unrelated to his primary purpose of AML/CFT/CPF were concerned.

If you are considering deploying a chatbot, we would recommend you consider how ‘warm’ (i.e. helpful) your model should be. Keep in mind that the warmer the responses, the more willing the chatbot will be to help users, but it may also be tempted to stray beyond its subject-matter training, presenting a risk that incorrect answers could lead to reputational damage. Equally, set the temperature too cold and the responses become short, blunt and often unhelpful.
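
To make the trade-off concrete, here is a self-contained illustration of how temperature reshapes a model’s next-token probabilities; the logits are arbitrary numbers chosen purely for demonstration.

```python
# Temperature rescales the model's raw scores (logits) before they
# become probabilities: low values concentrate probability on the top
# choice (blunt, predictable); high values flatten it (creative, riskier).
import math

def softmax_with_temperature(logits: list[float], temperature: float) -> list[float]:
    scaled = [l / temperature for l in logits]
    peak = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - peak) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]
print(softmax_with_temperature(logits, 0.2))   # ~[0.99, ...]: near-deterministic
print(softmax_with_temperature(logits, 1.5))   # much flatter: more variety, more risk
```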

Data Privacy and Security

How you gather and process user data is another key consideration when operating any website, and the introduction of AI adds a further level of complexity. It is essential that any website meets the requirements of the Data Protection Law and follows best practice. We are fortunate in Jersey to have the Jersey Office of the Information Commissioner, which provides excellent tools and guidance on its website.

We took expert legal advice to ensure that key documents were suitable for running such a website, and we completed a Data Protection Impact Assessment before going live to anticipate and mitigate risks relating to the use of the AskMax.je website.

Based on these considerations, we decided it would be most appropriate to limit the data collected when using Max to the question asked and the response provided. We also wanted this to give users an element of trust that, beyond our obligations to report anything serious to the authorities, they could ask any question they may have been afraid to ask elsewhere.
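
A minimal sketch of what such privacy-limited logging might look like follows; the storage format is illustrative.

```python
# Privacy-limited logging: only the question and response are
# retained, with no IP address, session or user identifiers.
import json

def log_exchange(question: str, response: str, path: str = "qa_log.jsonl") -> None:
    """Append an anonymous question/response record to a local file."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps({"question": question, "response": response}) + "\n")
```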

Maintenance and Monitoring

The information we added to Max was a snapshot in time, so, since the site went live, we have needed to update it frequently to keep the knowledge current.

This requires vigilance in keeping abreast of changes to the training information, but also the time and resource to ensure that responses remain accurate and that the user interface is performing as it should.
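
One way to stay on top of this, sketched here with hypothetical sources and dates, is a simple freshness check that flags any source document revised since it was last ingested.

```python
# Illustrative freshness check for a knowledge snapshot: flag any
# source revised since it was last ingested (names/dates hypothetical).
import datetime

def stale_sources(ingested: dict[str, datetime.date],
                  published: dict[str, datetime.date]) -> list[str]:
    """Return sources whose latest revision postdates their ingestion."""
    return [name for name, pub_date in published.items()
            if pub_date > ingested.get(name, datetime.date.min)]

ingested = {"AML/CFT/CPF Handbook": datetime.date(2024, 11, 1)}
published = {"AML/CFT/CPF Handbook": datetime.date(2025, 3, 15)}
print(stale_sources(ingested, published))  # ['AML/CFT/CPF Handbook']
```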

This administration should not be underestimated, as it can be time consuming and labour intensive. The time and effort required to maintain your chatbot should factor into your consideration of whether it is a practical solution for your proposed purpose.

Conclusion

We are incredibly proud of Max and his continued support of people in financial services compliance. He has fulfilled his original purpose, and we hope he will come to be seen as something genuinely transformative. Since Max was launched, the JFSC’s own chatbot Reggie has been developed and made available to industry. It’s fantastic to see more organisations realise the benefits that supportive AI tools like this can bring.

Importantly for Altair, we have taken the lessons from Max and built on them in the AI features of our comprehensive governance, risk and compliance tool Beacon, alongside our own internal innovation. By experimenting with, testing and remaining open to potential AI use cases, we feel that our products and services will benefit from reflecting on the way Max was brought into existence.

AI features can and will unlock tremendous value, but only when introduced with a clear-eyed understanding of the associated risks. From technical safeguards to organisational alignment, the path to success involves meticulous planning, ongoing stewardship and a strong ethical compass. It’s also essential to recognise any areas where your organisation may lack relevant expertise and to seek support where needed – our partnership with Blue Llama was fundamental to our success in developing Max, and they have been great to work with.

We hope that by learning from us as early adopters and embracing an open-minded approach to the risks and benefits AI tools can bring, organisations can avoid early pitfalls and harness effective AI solutions.

