Q&A: Navigating AI in Gaming Compliance - Insights and Strategies
In the world of AML compliance, keeping up with technology is key. With AI and machine learning becoming big buzzwords, it's natural for those in the gaming industry to wonder how these tools fit into their daily work. At Kinectify, we're thoughtfully applying AI to meet the unique challenges of compliance in gaming.
In this blog post, we answer some commonly asked questions about using AI in AML compliance including questions about choosing the right AI tools that keep your data safe as well as questions about ensuring AI tools are ready for the rigors of compliance work. We also touch on how Kinectify customizes AI to suit the gaming industry and how we address gaming errors and data privacy along the way.
Can you share some of the AI tools that AML compliance folks can use for their day-to-day tasks?
We recommend being careful when selecting AI tools in a compliance or risk management setting because many of the available tools, like ChatGPT, have terms and conditions that allow them to store and use the data you input. In doing so, you are entrusting the security of your data to a third party and permitting its use under that third party's terms and conditions.
We do recommend carefully selecting third-party vendors who are SOC 2 Type 2 compliant and ensuring they anonymize the data they will use for training their AI models. This better ensures security and protects customer identity.
What's the strategy for data readiness? Are we confident in the quality of the data for training?
In many cases, AI models are only as good as the data they are trained on. For the types of AI models that rely on supervised training, it is imperative that data goes through proper cleansing, deduplication, and anonymization before it is used to train production models. At Kinectify, we use several strategies to ensure our data is clean and useful before it makes its way into our training systems. For example, we have a settlement period during which data is not available for analysis, giving organizations time to make corrections before we perform any analysis on that data. In addition, we have sophisticated data ingestion processes that only allow clean, complete, and well-formatted data into our system. Finally, our data science team is very careful about how they perform sampling and which data is selected for training our models. We ensure we have sufficient data and that it is representative of our market before we use it for training.
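To make these steps concrete, here is a minimal sketch of a data-readiness pass. The record fields, the seven-day settlement window, and the function name are illustrative assumptions, not Kinectify's actual pipeline; it simply shows cleansing, the settlement period, deduplication, and anonymization applied in sequence.

```python
import hashlib
from datetime import datetime, timedelta

# Hypothetical transaction records; field names are illustrative only.
records = [
    {"patron_id": "P-1001", "amount": 4200.0, "ts": "2024-01-03T14:02:00"},
    {"patron_id": "P-1001", "amount": 4200.0, "ts": "2024-01-03T14:02:00"},  # duplicate
    {"patron_id": "P-2002", "amount": None, "ts": "2024-01-04T09:15:00"},    # incomplete
]

SETTLEMENT_DAYS = 7  # assumed settlement window before data is eligible for analysis

def prepare_for_training(rows, as_of):
    """Cleanse, enforce the settlement period, deduplicate, and anonymize."""
    seen, out = set(), []
    for r in rows:
        # Cleanse: drop incomplete rows.
        if r["amount"] is None:
            continue
        # Settlement period: skip records newer than the window.
        if datetime.fromisoformat(r["ts"]) > as_of - timedelta(days=SETTLEMENT_DAYS):
            continue
        # Deduplicate on the full record content.
        key = (r["patron_id"], r["amount"], r["ts"])
        if key in seen:
            continue
        seen.add(key)
        # Anonymize: replace the patron identifier with a one-way hash.
        out.append({**r, "patron_id": hashlib.sha256(r["patron_id"].encode()).hexdigest()[:12]})
    return out

clean = prepare_for_training(records, as_of=datetime(2024, 2, 1))
```

Of the three sample records, only one survives: the incomplete row is dropped, the duplicate is collapsed, and the remaining record's patron identifier is replaced with a hash before the data could reach a training set.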
In the EU, there is strict GDPR regulation. How do you deal with the legalities and implications of applying AI technologies and automating patron data analysis?
This depends on the use case and on country-specific laws that may provide safe harbor from standard GDPR requirements. For example, in the US, the USA PATRIOT Act and the Bank Secrecy Act provide safe harbor for organizations to use all available information, including automated systems, to ensure and facilitate compliance. Deploying AI in non-US jurisdictions covered by GDPR might entail certain customer notices, or a legal review by an attorney to determine whether your use case is permitted under AML-related laws and regulations despite GDPR provisions, or whether customer notifications are required. We also recommend anonymizing the customer data used for training AI models, which may help resolve GDPR concerns.
Does using a cloud compliance vendor mean gaming operators need to have their data in the cloud first?
No. We recommend the cloud for security, cost savings, and increased capabilities. However, you don't need to migrate your organization to the cloud to take advantage of AI, because you can send your data to a third-party provider and leverage their cloud capabilities. Using a third-party provider also lets you use AI models trained on industry-wide data, making them more accurate and precise than models trained on your organization's data alone.
For organizations that have concerns about control or sovereignty over their data in the cloud, cloud providers offer solutions using confidential computing and other methods to help alleviate that concern. These solutions have already been used by national governments operating on foreign soil (e.g., the US embassy in Beijing) and by international law enforcement task forces. They give organizations and governments a greater level of control over their data and even help maintain sovereignty over how data is accessed and who can access it, no matter the circumstances.
Does Kinectify use an OpenAI application as the engine to your product?
OpenAI models are somewhat broad and "heavy" for AML-specific use cases. Kinectify prefers to use and customize more nimble models trained on domain-specific language (i.e., gaming industry-specific language). Kinectify used OpenAI for evaluation and proof-of-concept (POC) purposes but decided to create proprietary models trained on industry-specific data. We recently published a blog article about the technologies we use: https://hubs.ly/Q026VNJq0
Is Kinectify’s tool trained on Casino and Gaming AML data, or is it trained on financial institution AML data?
The Kinectify system is wholly trained on casino and gaming AML data. This includes both land-based and online gaming. We recommend partnering with providers whose AI models are trained on gaming data for transaction monitoring and other functions reliant on gaming data.
How is data privacy and confidentiality maintained while using open source or other shared AI tools, for example, with the generative AI that helps automate Suspicious Activity Report (SAR) narratives?
Open-source tools, in and of themselves, do not mean that data is shared with third parties or that privacy is compromised. Depending on the tools you are using, it is possible to use open-source tools, public models, etc. without data ever leaving your environment. At Kinectify, we are open-source advocates and support open-source libraries whenever possible, but that doesn't mean we run confidential data through open-source third parties. We would not recommend using public tools, even public commercial tools, for evaluating confidential data without very careful consideration and risk analysis.
Kinectify uses open-source tools to build our own AI/ML models, and we use publicly trained models to enhance our proprietary systems, but we do not share confidential data outside our platform for AI/ML training purposes.
How does Kinectify handle the inevitable AI errors, especially hallucinations that are difficult to identify?
We recommend following established industry best practices for GenAI lifecycle management. This includes rigorous prompt engineering and iterative LLM fine-tuning until the target performance criteria are met. These best practices also include a robust MLOps delivery pipeline that measures the effectiveness of models prior to release and ensures they don't regress in performance or accuracy when new models are deployed to production.
To specifically address hallucinations, the recommended process is to deliver GenAI output as suggestions that users can modify as needed. Capture the originally generated content, compare it to the final submission, and have alerts in place for cases where the suggestions are not meeting users' needs.
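The capture-and-compare step above can be sketched as a simple similarity check. The threshold, function name, and sample narratives are illustrative assumptions; a production system would use richer signals than raw text similarity.

```python
from difflib import SequenceMatcher

DIVERGENCE_THRESHOLD = 0.5  # assumed: alert when less than half of the draft survives editing

def review_sar_narrative(generated: str, submitted: str):
    """Compare a GenAI-drafted SAR narrative with the analyst's final submission.

    Returns the similarity ratio (0.0-1.0) and whether the divergence is large
    enough to alert that suggestions may not be meeting users' needs.
    """
    ratio = SequenceMatcher(None, generated, submitted).ratio()
    return ratio, ratio < DIVERGENCE_THRESHOLD

# The analyst kept the draft largely intact, so no alert fires here.
draft = "Patron conducted structured cash-ins below the CTR threshold."
final = "Patron conducted structured cash-ins below the CTR threshold over three days."
ratio, alert = review_sar_narrative(draft, final)
```

Aggregating these ratios across many reports is what surfaces a systematically underperforming model, rather than any single edit.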
What types of methods are being utilized to verify gaming patrons' identities, and what technology would Kinectify like to use to improve this?
The Kinectify platform is not involved in the identity verification process during customer onboarding (that is usually the responsibility of the player management system). However, we do perform enhanced due diligence, or Know Your Customer (KYC) reporting, through the platform, and I think that may be what you're asking. The Kinectify platform aggregates large volumes of data about players from many data providers. Although this information is targeted to KYC reviews for the gaming industry, and we carefully tune our search utilities to ensure we are capturing AML-relevant data, it still means many articles and pieces of evidence have to be reviewed. We are using AI/ML to classify this data and ensure that the most important information is surfaced to analysts.
Another identity verification technology we have in place concerns watchlist and adverse news monitoring. We have sophisticated artificial intelligence providing accurate and precise data for watchlist screening, ensuring that organizations do not have to weed through large volumes of false positives.
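For illustration only, here is a toy version of threshold-based name screening. The watchlist entries, the threshold, and the function names are invented for this sketch; real screening models are far more sophisticated, but the idea of tuning a similarity cutoff to suppress false positives is the same.

```python
from difflib import SequenceMatcher

# Illustrative watchlist; these names and the threshold are assumptions, not real data.
WATCHLIST = ["Ivan Petrov", "Maria Gonzalez"]
MATCH_THRESHOLD = 0.85  # a high threshold keeps false-positive volume down

def normalize(name: str) -> str:
    """Lowercase and collapse whitespace so trivial variations don't cause misses."""
    return " ".join(name.lower().split())

def screen(patron_name: str):
    """Return watchlist entries whose similarity to the patron name meets the threshold."""
    hits = []
    for entry in WATCHLIST:
        score = SequenceMatcher(None, normalize(patron_name), normalize(entry)).ratio()
        if score >= MATCH_THRESHOLD:
            hits.append((entry, round(score, 2)))
    return hits
```

With this setup, `screen("Ivan  Petrov")` matches despite the stray whitespace, while an unrelated name like `screen("John Smith")` returns no hits instead of flooding analysts with spurious alerts.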
One major roadblock to organizations purchasing AI solutions is the lack of AI procurement standards, policies, and guidelines, and of an AI governance framework. Have vendors recognized the lack of an AI governance framework as a major barrier to making sales? Have they considered how such frameworks could help gaming operators overcome this hurdle, given that it is a major barrier to procurement?
AI procurement standards are still evolving, but there is a growing consensus on some key guardrails, including:
Transparency and accountability: Organizations should be clear about their goals for using AI and how they will evaluate the performance of AI systems. They should also be accountable for the ethical and responsible use of AI.
Fairness and equity: AI systems should be designed to be fair and equitable, and organizations should take steps to mitigate any potential modeling bias.
Security and privacy: AI systems should be secure and protect the privacy of users.
Human oversight: AI systems should remain subject to meaningful human oversight, with clear lines of accountability.
Monitoring and evaluation: Organizations should monitor and evaluate the performance of the AI solution to ensure that it is meeting their needs and that it is being used in a responsible and ethical manner.
As the field of AI continues to evolve, procurement standards are likely to mature, helping organizations acquire AI solutions that are responsible, ethical, and effective. We are working with leading groups involved in establishing industry standards and guidance on procurement frameworks, as you mentioned, and we hope to see these released to the industry sometime in 2024 to assist organizations in procuring AI capabilities.
Have more questions on AI's role in gaming or have insights to share? We'd love to hear from you - get in touch with us here.
ABOUT KINECTIFY
Kinectify is an AML risk management technology company serving gaming operators in both the US and Canada. Our modern AML platform seamlessly integrates all of an organization's data into a single view and workflow, empowering gaming companies to efficiently manage risk across their enterprise. In addition, Kinectify's advisory services extend gaming operators' capacity with industry experts who can design and test programs, meet compliance deadlines, and even provide outsourced services for the day-to-day administration of compliance programs.
To learn more about Kinectify and book a demo, click here.