MongoDB Applied

Customer stories, use cases and experience

Unmasking Deception: Harnessing the Power of MongoDB Atlas and Amazon SageMaker Canvas for Fraud Detection

Financial services organizations face growing risks from cybercriminals. High-profile hacks and fraudulent transactions undermine faith in the industry. As technology evolves, so do the techniques employed by these perpetrators, making the battle against fraud a perpetual challenge. Existing fraud detection systems often grapple with a critical limitation: relying on stale data. In a fast-paced and ever-evolving landscape, relying solely on historical information is akin to driving by looking into the rearview mirror. Cybercriminals continuously adapt their tactics, forcing financial institutions to stay one step ahead, and the newest tactics can often be seen in the data.

That's where the power of operational data comes into play. By harnessing real-time data, fraud detection models can be trained on the most accurate and relevant clues available. MongoDB Atlas, a highly scalable and flexible developer data platform, coupled with Amazon SageMaker Canvas, an advanced machine learning tool, presents a groundbreaking opportunity to revolutionize fraud detection. By leveraging operational data, this synergy holds the key to proactively identifying and combating fraudulent activities, enabling financial institutions to safeguard their systems and protect their customers in an increasingly treacherous digital landscape.

MongoDB Atlas

MongoDB Atlas, the developer data platform, is an integrated suite of data services centered around a cloud database designed to accelerate and simplify how developers build with data. MongoDB Atlas's document-oriented architecture is a game-changer for financial services organizations. Its ability to handle massive amounts of data in a flexible schema empowers financial institutions to effortlessly capture, store, and process high-volume transactional data in real time. This means that every transaction, every interaction, and every piece of operational data can be seamlessly integrated into the fraud detection pipeline, ensuring that the models are continuously trained on the most current and relevant information available. With MongoDB Atlas, financial institutions gain an unrivaled advantage in their fight against fraud, unleashing the full potential of operational data to create a robust and proactive defense system.

Amazon SageMaker Canvas

Amazon SageMaker Canvas revolutionizes the way business analysts leverage AI/ML solutions by offering a powerful no-code platform. Traditionally, implementing AI/ML models required specialized technical expertise, making it inaccessible for many business analysts. However, SageMaker Canvas eliminates this barrier by providing a visual point-and-click interface to generate accurate ML predictions for classification, regression, forecasting, natural language processing (NLP), and computer vision (CV). SageMaker Canvas empowers business analysts to unlock valuable insights, make data-driven decisions, and harness the power of AI without being hindered by technical complexities. It boosts collaboration between business analysts and data scientists by allowing them to share, review, and update ML models across tools. It brings the realm of AI/ML within reach, allowing analysts to explore new frontiers and drive innovation within their organizations.
Reference Architecture

The reference architecture above describes an end-to-end solution for detecting different types of fraud in the banking sector, including card fraud, identity theft, account takeover, money laundering, consumer fraud, insider fraud, and mobile banking fraud, to name a few. The architecture diagram illustrates model training and near real-time inference. The operational data stored in MongoDB Atlas is written to an Amazon S3 bucket using the Triggers feature in Atlas Application Services. Once stored, the data is used to create and train the model in Amazon SageMaker Canvas. SageMaker Canvas stores the metadata for the model in the S3 bucket and exposes the model endpoint for inference. For step-by-step instructions on how to build the fraud detection solution described above with MongoDB Atlas and Amazon SageMaker Canvas, read our tutorial.
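To make the inference path concrete, here is a minimal Python sketch of how an application tier might score a transaction: fetch the latest operational document from Atlas with PyMongo, then call a deployed SageMaker endpoint via boto3. The cluster URI, namespace, endpoint name, feature fields, and response shape are hypothetical placeholders, not details prescribed by the tutorial.

```python
import json
import boto3
from pymongo import MongoClient

# Hypothetical names -- substitute your own Atlas URI, namespace, and endpoint.
client = MongoClient("mongodb+srv://user:pass@cluster0.example.mongodb.net")
txns = client["payments"]["transactions"]
runtime = boto3.client("sagemaker-runtime", region_name="us-east-1")

def score_latest_transaction(card_id: str) -> dict:
    """Fetch the most recent transaction for a card and score it for fraud."""
    doc = txns.find_one({"cardId": card_id}, sort=[("createdAt", -1)])
    # Assume the endpoint accepts a CSV row of numeric features.
    payload = f'{doc["amount"]},{doc["merchantCategory"]},{doc["hourOfDay"]}'
    resp = runtime.invoke_endpoint(
        EndpointName="fraud-detection-canvas-endpoint",  # hypothetical
        ContentType="text/csv",
        Body=payload,
    )
    # The response shape depends on the deployed model; tabular models
    # typically return per-class scores as JSON.
    return json.loads(resp["Body"].read())
```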

June 21, 2023
Applied

MongoDB and BigID Delivering Scalable Data Privacy Compliance for Financial Services

Ensuring data privacy compliance has become a critical priority for banks and financial services. Safeguarding customer data is not only crucial for maintaining trust and reputation but also a legal and ethical obligation. In this blog, we will dive into why and how the financial services industry can adopt an effective approach to data privacy compliance using BigID and MongoDB.

Embracing a privacy-first mindset

To establish a robust data privacy compliance framework, banks and financial services must prioritize privacy from the onset. This entails adopting a privacy-first mindset throughout all aspects of their operations. Embedding privacy principles into the organizational culture helps create a foundation for compliance, ensuring that data protection is a core value rather than an afterthought.

Understand the regulatory landscape

Compliance with data privacy regulations is an ongoing process that requires a deep understanding of the applicable legal landscape. Banks and financial services should invest in comprehensive knowledge of regulations such as the General Data Protection Regulation (GDPR), California Consumer Privacy Act (CCPA), Digital Personal Data Protection (DPDP), and other relevant global and local regulations. This understanding helps organizations identify their obligations, assess risks, and implement the controls necessary to ensure compliance.

Ensuring compliance with regulatory requirements

Data privacy compliance requirements vary based on the specific regulations applicable to a state, region, or country. Organizations must adhere to these regulatory requirements, as doing so is crucial to meeting legal obligations, maintaining trust, and mitigating risks.

Regularly Update Policies and Procedures: The data privacy landscape is constantly evolving, with new regulations and best practices emerging regularly. Banks and financial services should stay ahead of these developments and review and update their privacy policies and procedures accordingly. Regular audits and risk assessments should be conducted to identify gaps and ensure that the organization remains compliant with evolving requirements.

Implement Data Discovery & Governance Frameworks: Effective data governance is a fundamental aspect of data privacy compliance. Banks and financial services should establish data governance frameworks with clear policies, procedures, and accountability mechanisms. This includes defining data ownership, identifying data across systems, implementing data classification, setting retention periods, and establishing secure data storage and disposal protocols. Regular audits and internal controls help ensure adherence to these policies and procedures.

Streamline Consent Management: Transparency and consent are vital components of data privacy compliance. Banks and financial services should provide clear and easily understandable privacy notices to customers, outlining the types of data collected, the purposes of the processing, and any third-party sharing. Additionally, organizations should develop user-friendly consent mechanisms that enable individuals to make informed choices about their data.

Fulfill User Rights and Data Subject Access Requests: All privacy regulations grant individuals various rights over their data, including the right to access, correct, delete, and restrict the sale of data. Fulfilling these rights requires mechanisms such as customer self-service portals and automated workflows for data subject access requests.
Conduct Privacy Impact Assessments (PIAs): Privacy Impact Assessments are essential tools for evaluating and mitigating privacy risks associated with data processing activities. Banks and financial services should regularly conduct PIAs to identify potential privacy concerns, assess the impact of data processing, and implement appropriate safeguards. PIAs enable organizations to proactively address privacy risks, demonstrate compliance, and enhance transparency in data processing practices.

Prioritize Data Minimization and Purpose Limitation: Collecting and processing only the necessary personal data is a key principle of data privacy compliance. Banks and financial services should adopt data minimization strategies, limiting data collection to what is essential for legitimate business purposes. Furthermore, data should be processed only for specific, clearly defined purposes and not repurposed without an appropriate consent or legal basis. By embracing data minimization and purpose limitation, organizations can reduce privacy risks and respect individuals' privacy preferences.

Navigate Data Localization & Transfers: Data localization involves keeping data within the jurisdiction where it was collected. While this approach can help ensure data protection, it can also create challenges for businesses that operate in multiple countries. Implementing data localization practices ensures that customer data remains within a country's boundaries while adhering to cross-border data transfer requirements.

Strengthen Security Measures: Protecting customer data from unauthorized access, breaches, and cyber threats is crucial. Banks and financial services should implement robust security measures, including encryption, access controls, intrusion detection systems, and regular security assessments. Ongoing staff training on cybersecurity awareness and best practices is essential to mitigate the risk of human error or negligence.

Achieving privacy compliance with BigID and MongoDB

Financial institutions need the ability to find, classify, inventory, and manage all of their sensitive data, regardless of whether it's on-prem, hybrid-cloud, or cloud-based. Organizations must know where their data is located, replicated, and stored, as well as how it is collected and processed. This is a momentous task that requires addressing common challenges like siloed data, lack of visibility and accurate insight, and balancing legacy systems with cloud data, all while meeting a litany of compliance requirements. With a major shift towards geographically dispersed data, organizations must make sure they are aware of, and fully understand, the local and regional rules and requirements that apply to storing and managing data. Organizations without a strong handle on where their data is stored risk millions of dollars in regulatory fines for mishandling data, loss of brand credibility, and distrust from customers. A modern approach relying on technologies like BigID and MongoDB helps to solve data privacy, data protection, and data governance challenges. BigID, the industry leader for data security, privacy, compliance, and governance, is trusted by some of the world's largest financial institutions to deliver fast and accurate data discovery, classification, and correlation across large and complex data sets.
BigID utilizes MongoDB as the internal data store for its platform to help generate data insights at scale, automate advanced discovery and classification, and accommodate complex enterprise requirements. As technology partners, MongoDB's document model and distributed architecture enable BigID to deliver a scalable and flexible data management platform for data privacy and protection.

How BigID powered by MongoDB addresses privacy compliance challenges

By taking a privacy-first approach to data and risk, organizations can address the challenges of continuous compliance, minimize security risks, proactively address data privacy programs, and strengthen data management initiatives. BigID, powered by MongoDB, helps organizations identify, manage, and monitor all personal and sensitive data activity to achieve compliance with several data privacy requirements. Organizations get:

Deep Data Discovery: BigID helps organizations discover and inventory their critical data, including financial information. This enables organizations to understand what data they have and where it is located, which is an important first step in achieving compliance.

Accurate Classification: With exact value matching, BigID's graph-based technology can identify and classify personal and sensitive data in any environment, such as email, shared drives, databases, data lakes, and many more.

Efficient Data Mapping: Automatically map PII and PI to identities, entities, and residencies to connect the dots in your data environments.

Streamlined Data Lifecycle Management: Accurately find, classify, catalog, and tag your data, and easily enforce governance and control, from retention to deletion.

Fulfillment of Consent & Data Rights Requests: Automate consent and data rights management with a privacy portal that includes a seamless UX for managing data subject access requests (DSARs). Centralize DSARs with automated access and deletion workflows to fulfill end-to-end data rights requests.

Effective Privacy Impact Assessments (PIA/DPIA): Easily build seamless workflows and frameworks for privacy impact assessments to estimate the risk associated with all data inventory.

ML-based Data Access Management: BigID helps mitigate risk from open-access exposure by remediating file access violations on critical data across all data environments.

Validated Data Transfers: Monitor cross-border data transfers and create policies to enforce data residency and localization requirements.

Effective Remediation: BigID helps define remediation actions for critical data and provides audit records, with integration to ticketing systems like Jira for seamless workflows.

By adopting a privacy-first approach to data and risk, financial services organizations can tackle the challenges of continuous compliance, mitigate security risks, and enhance data management initiatives. BigID, powered by MongoDB, offers comprehensive solutions to help organizations identify, manage, and monitor personal and sensitive data activities, enabling them to achieve compliance with various data privacy requirements. Looking to learn more about how you can reduce risk, accelerate time to insight, and get data visibility and control across all your data, everywhere?
Take a look at the below resources:

Control your data for data security, compliance, privacy, and governance with BigID
Data-driven privacy compliance and automation for new and emerging data privacy and protection regulation
Protect your data with strong security defaults on the MongoDB developer data platform
Manage and store data where you want with MongoDB
MongoDB for Financial Services

June 21, 2023
Applied

Fueling Pricing Strategies with MongoDB and Databricks

Deploying real-time analytics solutions with the right tech stack can have transformative benefits. Retailers want to grow their brand or improve customer experience with value-based pricing, whilst remaining competitive and cost effective. Despite their ambition to become "data driven" operations, companies often fail in their endeavors; at the core of this is the struggle to do real-time analytics. We will explore the architecture in Figure 1 and discuss the advantages of integrating MongoDB Atlas and Databricks as a perfect pairing for retail pricing strategies using real-time AI. The solution we'll describe integrates concepts from event-driven architecture on the data generation and ingestion side, real-time analytics orchestration, machine learning, and microservices. Let's get started!

Figure 1: Overview of a dynamic pricing solution architecture

Reduce friction with flexibility

The pricing data complexity for a retailer with a diverse product line increases due to factors like seasonal sales, global expansion, and new product introductions. Tracking and analyzing historical prices and product offerings becomes more challenging as they change throughout the year. Analytics solutions built around event-driven architectures try to explain what is happening in a specific system or solution based on any significant occurrence, such as user actions, data updates, system alerts, or sensor readings. Deciding which occurrences are crucial to understanding your customers, and instrumenting your business model around them, is where things start to become more intricate, especially when trying to implement your data models on traditional relational database management systems, which are at a disadvantage when it comes to pairing their data structures with object-oriented applications. A retailer's inability to adapt its data model to customer behavior quickly translates into friction and a weaker presence in the market: for example, poor pricing strategies compared to competitors because they lack information on historic price points and how they vary between products.

Figure 2: An inflexible data model is a road block for innovation

That friction is contagious throughout the whole value chain of an organization, affecting the semantic layer of the business (a bridge between the technical data structures and business users' understanding), generating data inconsistencies, and increasing time to insight. The capacity of your conceptual data model to adapt to ever-changing customer behavior helps reduce that friction significantly, as its flexibility allows for more intuitive data modeling of real-world events. For your real-time pricing challenges, the MongoDB Atlas document model, with its embedding and extended reference capabilities, becomes the perfect tool for the job, as it allows for faster feature development and, as a consequence, stronger test-driven growth and talent retention. In combination with its high-performance queries and horizontal scalability, the solution becomes robust: it will handle the high throughput of clickstreams on your ecommerce applications and still be able to power real-time, data-driven decision-making features. Its ease of integration with other platforms, thanks to strong API capabilities and drivers, makes it the perfect solution on top of which to build your business's operational and intelligence layers, as you'll avoid vendor lock-in and data scientists can easily leverage AI frameworks to work with fresh data.
Its distributed-by-default design, combined with following best practices, guarantees that your operational data layer will handle the workload needed. As AI algorithms analyze vast amounts of historical and real-time data to make pricing decisions, having a distributed platform like MongoDB enables efficient storage, processing, and retrieval of data across multiple nodes.

From what? To why? The intelligence layer

To unlock meaningful market growth and achieve it at scale, your analytics need to evolve from just understanding what is happening, by querying and analyzing historical data, to understanding why the events measured by your operational data layer are happening, and even further, to forecasting them. For a pricing solution, retailers would need to gather historical pricing data points for their product lines and shape them through ETL (Extract, Transform, Load) pipelines to feed machine learning algorithms. This process is often complicated and brittle using the traditional data warehousing approach, often incurring data duplication that makes it difficult and costly to manage.

Figure 3: Reduced friction thanks to seamless integration of the different data layers

The advantage of using MongoDB Atlas as your operational data layer is that through its Aggregation Pipelines you can shape your data in any way you need, and then through MongoDB App Services you can instrument Triggers and Functions to simplify the process and consume that data in Databricks by leveraging the MongoDB Atlas Spark connector (a minimal sketch appears at the end of this post). Databricks provides you with a streamlined way of working with your models, by writing Python code in notebooks on hosted clusters. You can leverage its MLflow integration to register experiments, which can then be turned into models deployed behind an endpoint. By transforming your data and integrating your operational layer, through connectors and API calls as triggers and functions, with your intelligence layer for machine learning and AI, you can easily build a pricing solution that generates market growth for your organization from its core, with a semantic layer acting as a bridge between the technical aspects of data storage and the business requirements of data analysis.

Uncover new growth opportunities

Designing a real-time analytics solution with MongoDB Atlas and Databricks is not only the fastest way to unlock your team's capabilities to devise pricing strategies; it also sets the cornerstone for building automated rules for more complex solutions. Other ways of automating your retail application with AI-driven insight could include: optimizing your marketing mix budget according to each product's price elasticity, adding another analytical layer of customer segmentation data to achieve targeted dynamic pricing, or optimizing your supply chain with real-time sales forecasting. By taking advantage of MongoDB Charts or the MongoDB BI Connector, you can fuel your business dashboards, making the semantic layer of the business model the central point for your teams' alignment.

Foundations for growth

Modern ecommerce sites unleash the power of real-time analytics and automation to create better experiences for customers and a more profound approach to customer analytics, unlocking the power of machine learning to discover trends in behavioral data and effectively turning companies into automated growth machines. If you want to discover how to build a simple dynamic pricing solution integrating MongoDB Atlas and Databricks, make sure to read this guide.
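Here is the minimal PySpark sketch referenced above, illustrating the operational-to-intelligence handoff. It assumes the MongoDB Spark Connector 10.x is installed on the Databricks cluster; the URI, database, collection, and field names are hypothetical.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()  # provided by Databricks notebooks

# Read operational pricing events from MongoDB Atlas (hypothetical namespace).
events = (
    spark.read.format("mongodb")
    .option("connection.uri", "mongodb+srv://user:pass@cluster0.example.mongodb.net")
    .option("database", "retail")
    .option("collection", "pricing_events")
    .load()
)

# Shape features for a price-elasticity model: average price and units sold per SKU per day.
features = (
    events.groupBy("sku", F.to_date("timestamp").alias("day"))
    .agg(F.avg("price").alias("avg_price"), F.sum("quantity").alias("units_sold"))
)

# Write the feature table back to Atlas so the serving layer can read it.
(
    features.write.format("mongodb")
    .option("connection.uri", "mongodb+srv://user:pass@cluster0.example.mongodb.net")
    .option("database", "retail")
    .option("collection", "sku_daily_features")
    .mode("append")
    .save()
)
```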

June 20, 2023
Applied

Modernize Your Factory Operations: Build a Virtual Factory with MongoDB Atlas in 5 Simple Steps

Thank you to Karolina Ruiz Rojelj for her contributions to this post. Virtual factories are revolutionizing the manufacturing landscape. Described as the "Revolution in factory planning" by BMW Group at NVIDIA, this cutting-edge approach is transforming the way companies like BMW and Hyundai operate, thanks to groundbreaking partnerships with technology companies such as NVIDIA and Unity. At the heart of this revolution lies the concept of virtual factories: computer-based replicas of real-world manufacturing facilities. These virtual factories accurately mimic the characteristics and intricacies of physical factories, making them a powerful tool for manufacturers to optimize their operations. By leveraging AI, they unlock a whole new world of possibilities, revolutionizing the manufacturing landscape and paving the way for improved productivity, cost savings, and innovation. In this blog we will explore the benefits of virtual factories and guide you through the process of building your own virtual factory using MongoDB Atlas. Let's dive in!

Unlocking digital transformation

The digitalization of the manufacturing industry has given rise to the development of smart factories. These advanced factories incorporate IoT sensors into their machinery and equipment, allowing workers to gather data-driven insights on their manufacturing processes. However, the evolution does not stop at smart factories automating and optimizing physical production. The emergence of virtual factories introduces simulation capabilities and remote monitoring, leading to the creation of factory digital twins, as depicted in Figure 1. By bridging the concepts of smart and virtual factories, manufacturers can unlock greater levels of efficiency, productivity, flexibility, and innovation.

Figure 1: From smart factory to virtual factory

Leveraging virtual factories in manufacturing organizations provides many benefits, including:

Optimization of production processes and identification of inefficiencies. This can lead to increased efficiency, reduced waste, and improved quality.

Aiding quality control by contextualizing sensor data with the manufacturing process. This allows analysis of quality issues and implementation of the necessary control measures while dealing with complex production processes.

Simulating manufacturing processes and testing new products or ideas without the need for physical prototypes or real-world production facilities. This significantly reduces costs associated with research and development and minimizes the risk of product failure.

However, setting up a virtual factory for complex manufacturing is difficult. Challenges include managing system overload, handling vast amounts of data from physical factories, and creating accurate visualizations. The virtual factory must also adapt to changes in the physical factory over time. Given these challenges, having a data platform that can contextualize all the data coming in from the physical factory and then feed it to the virtual factory, and vice versa, is crucial. And that is where MongoDB Atlas, our developer data platform, comes in: providing synchronization capabilities between the physical and virtual worlds, enabling flexible data modeling, and providing access to the data via a unified query interface, as seen in Figure 2.
Figure 2: MongoDB Atlas as the data platform between physical and virtual factories

Now that we've discussed the benefits and the challenges of building virtual factories, let's unpack how simple it is to build a virtual factory with MongoDB Atlas.

How to build a virtual factory with MongoDB Atlas

1. Define the business requirements

The first step of the process is to define the business requirements for the virtual factory. Our team at MongoDB uses a smart factory model from Fischertechnik to demonstrate how easily MongoDB can be integrated to solve the digital transformation challenges of IIoT in manufacturing. This testbed serves as our foundational physical factory and the starting point of this project.

Figure 3: The smart factory testbed

We defined our set of business requirements as the following:

Implement a virtual run of the physical factory to identify layout and process optimizations.

Provide real-time visibility of physical factory conditions, such as inventory, for process improvements.

This last requirement is critical; while standalone simulation models of factories can be useful, they typically do not take into account real-time data from the physical factory. By connecting the physical and virtual factories, a digital twin can be created that takes into account the actual performance of the physical factory in real time. This enables more accurate predictions of the factory's performance, which improves decision-making and process optimization, and enables remote monitoring and control, reducing downtime and improving response times.

2. Create a 3D model

Based on the previous business requirements, we created a 3D model of the factory in a widely used game engine, Unity. This virtual model can be visualized using a computer, tablet, or any virtual reality headset.

Figure 4: 3D model of the smart factory in Unity

Additionally, we added four different buttons (red, white, blue, and "stop") which enable users to submit production orders to the physical factory or stop the process altogether.

3. Connect the physical and virtual factories

Once we created the 3D model, we connected the physical and virtual factories via MongoDB Atlas. Let's start with our virtual factory software application. Regardless of where you deploy it, be it a headset or a tablet, you can use Realm by MongoDB to present data locally inside Unity and then synchronize it with MongoDB Atlas as the central data layer. This allows us to have embedded databases where resources are constrained, with MongoDB Atlas as a powerful and scalable cloud backend. And lastly, to ensure data synchronization and communication between these two components, we leveraged MongoDB Atlas Device Sync, which provides a bi-directional synchronization mechanism and network handling.

Now that we have our virtual factory set up, let's have a look at our physical one. In a real manufacturing environment, many shopfloor connectivity systems can connect to MongoDB Atlas, and for those that don't natively, it is very straightforward to build a connector. At the shopfloor layer you can have MongoDB set up so that you can analyze and visualize your data locally and set up materialized views. On the cloud layer, you can push data directly to MongoDB Atlas or use our Cluster-to-Cluster Sync functionality. A single IoT device, by itself, does not generate much data.
But as the number of devices grows, so does the volume of machine-generated data, and therefore the complexity of the data storage architecture required to support it. The data storage layer is often one of the primary causes of performance problems as an application scales. A well-designed data storage architecture is a crucial component in any IoT platform. In our project, we have integrated AWS IoT Core to subscribe to MQTT messages from the physical factory. Once these messages are received and filtered, they are transmitted to MongoDB Atlas via an HTTP endpoint. The HTTP endpoint then triggers a function which stores the messages in the corresponding collection based on their source (e.g., messages from the camera are stored in the camera collection). With MongoDB Atlas, as your data grows you can archive it using our Atlas Online Archive functionality.

Figure 5: Virtual and physical factories data flow

In Figure 5, we can see everything we've put together so far. On the left-hand side we have our virtual factory, where users can place an order. The order information is stored inside Realm, synced with MongoDB Atlas using Atlas Device Sync, and sent to the physical factory using Atlas Triggers. On the other hand, the physical factory sends out sensor data and event information about the physical movement of items within the factory. MongoDB Atlas provides the full data platform experience for connecting both the physical and virtual worlds!

4. Data modeling

Now that the connectivity has been established, let's look at modeling the data that is coming in. As you may know, any piece of data that can be represented in JSON can be natively stored in, and easily retrieved from, MongoDB. The MongoDB drivers take care of converting the data to BSON (binary JSON) and back when querying the database. Furthermore, you can use documents to model data in any way you need, whether it is key-value pairs, time series data, or event data. On the topic of time series data, MongoDB Time Series allows you to automatically store time series data in a highly optimized and compressed format, reducing customer storage footprint as well as achieving greater query performance at scale (see the sketch at the end of this post).

Figure 6: Virtual and physical factories sample data

It really is as simple as it looks, and the best part is that we are doing all of this inside MongoDB Atlas, making a direct impact on developer productivity.

5. Enable computer vision for real-time inventory

Once we have the data modeled and connectivity established, our last step is to run event-driven analytics on top of our developer data platform. We used computer vision and AI to analyze the inventory status in the physical factory and then pushed notifications to the virtual one. If the user tries to order a piece in the virtual factory that is not in stock, they will immediately get a notification from the physical factory. All this is made possible using MongoDB Atlas and its connectors to various AI platforms. If you want to learn more, stay tuned for part 2 of this blog series, where we'll dive deep into the technical considerations of this last step.

Conclusion

By investing in a virtual factory, companies can optimize production processes, strengthen quality control, and perform cost-effective testing, ultimately improving efficiency and innovation in manufacturing operations.
MongoDB, with its comprehensive features and functionality that cover the entire lifecycle of manufacturing data, is well-positioned to implement virtual factory capabilities for the manufacturing industry. These capabilities put MongoDB in a unique position to fast-track the digital transformation journey of manufacturers.

Learn more:

MongoDB & IIoT: A 4-Step Data Integration
Manufacturing at Scale: MongoDB & IIoT
Manufacturing with MongoDB
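To make the data modeling step (step 4 above) concrete, here is a minimal PyMongo sketch of the time series pattern it references. The URI, database, collection, and sensor payload are illustrative assumptions, not the project's actual schema.

```python
from datetime import datetime, timezone
from pymongo import MongoClient

client = MongoClient("mongodb+srv://user:pass@cluster0.example.mongodb.net")  # hypothetical URI
db = client["factory"]

# Create a time series collection once; MongoDB stores it in an optimized,
# compressed format keyed on the time field.
if "sensor_readings" not in db.list_collection_names():
    db.create_collection(
        "sensor_readings",
        timeseries={
            "timeField": "ts",       # when the measurement happened
            "metaField": "source",   # which device/station produced it
            "granularity": "seconds",
        },
    )

# Insert an MQTT-style measurement as a plain document.
db.sensor_readings.insert_one({
    "ts": datetime.now(timezone.utc),
    "source": {"station": "oven", "sensor": "temperature"},
    "value": 212.5,
})

# Query the most recent readings for one station.
for doc in db.sensor_readings.find({"source.station": "oven"}).sort("ts", -1).limit(10):
    print(doc["ts"], doc["value"])
```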

June 20, 2023
Applied

Ulta Beauty Solves Seasonal Shopping

The holiday season can feel like a whirlwind to retailers. Between keeping up with sudden shifts in shopper preferences, supply chain nuances, and a massive increase in demand, the hardest part can be ensuring great customer experiences, both in store and online. This year, retailers have a unique challenge ahead, as those on Google Cloud experienced more online traffic in the first half of 2022 than in all of 2019. Retailers will need to get an early start to the 2023 shopping season, since 50% of consumers plan to get a jump on their holiday shopping before the traditional start on Black Friday. Luckily, modern advancements in automation and infrastructure can help retailers survive seasonal spikes and stay innovative year-round.

A platform for innovation

Ulta Beauty's system overhaul proved to be a success when the company implemented Google Kubernetes Engine (GKE) and enabled the development of cloud-native applications. Ulta Beauty was able to take advantage of this strategic change to swiftly fix bugs, test new offerings, and provide customers with far better experiences. Thanks to this transformation and the use of GKE, their developers are now able to launch new products and services faster and create great customer interactions. Now Ulta Beauty's guests have dynamic and personal connections with beauty and wellness, with their preferences taken into account. Despite this, the company had to overcome some difficulties. As Sethu Madhav Vure, IT Architect, Ulta Beauty, explains, "Microservices are not a silver bullet. For us, the struggle was breaking up a monolithic environment into multiple applications while keeping existing services functional and preparing for the future." Ulta Beauty sought to simplify and scale through a domain-driven design approach. Identifying and grouping similarly structured operations enabled them to create a modern architecture. To match their needs for dynamic scaling, MongoDB Atlas was selected and leveraged with Google Cloud integrations, resulting in a quick proof of concept. Reflecting on the evaluation, Vure remarked, "The free tier of MongoDB Atlas allowed us to prove the value of the technology before we invested in it." The partnership between MongoDB Atlas and Google Cloud has enabled Ulta Beauty to maximize efficiency and take a rapid, iterative approach to its newest projects. It allows them to better manage their expansive data and to deploy and scale offerings quickly and successfully, as seen in a recent unplanned traffic surge that took them less than an hour to manage. Equipped with the dynamic scalability of GKE and the on-demand functionality of MongoDB Atlas, Ulta Beauty can face any challenge with confidence.

Seasonal prep

Ulta Beauty has drastically improved its technical infrastructure to meet the demands of the holiday season. With just 20 GKE pods, the company is now able to scale up to 2,400 transactions per second! Partnering with Google Cloud, Ulta Beauty leveraged event-based integrations and Cloud Pub/Sub middleware on top of MongoDB Atlas integrations, resulting in an efficient process that maximizes the power of their platform. By optimizing their technology partner stack, Ulta Beauty was able to make a dramatic shift in their IT culture, allowing them to trial new solutions faster and with the full support of leadership. The result?
They were able to handle increased traffic during the holiday shopping season with ease, delivering the customer service they promised, free from the worry of outages. As Vure puts it, "We are now better prepared for a stress-free holiday season, enabling us to focus on creating even more great service for our customers." Check out MongoDB on Google Cloud Marketplace to learn more about what these partners can do for your business.

June 15, 2023
Applied

Dissecting Open Banking with MongoDB: Technical Challenges and Solutions

Thank you to Ainhoa Múgica for her contributions to this post. Unleashing a disruptive wave in the banking industry, open banking (or open finance), as the term indicates, has compelled financial institutions (banks, insurers, fintechs, corporates, and even government bodies) to embrace a new era of transparency, collaboration, and innovation. This paradigm shift requires banks to openly share customer data with third-party providers (TPPs), driving enhanced customer experiences and fostering the development of innovative fintech solutions by combining "best-of-breed" products and services. As of 2020, 24.7 million individuals worldwide used open banking services, a number that is forecast to reach 132.2 million by 2024. This rising trend fuels competition, spurs innovation, and fosters partnerships between traditional banks and agile fintech companies. In this transformative landscape, MongoDB, a leading developer data platform, plays a vital role in supporting open banking by providing a secure, scalable, and flexible infrastructure for managing and protecting shared customer data. By harnessing the power of MongoDB's technology, financial institutions can lower costs, improve customer experiences, and mitigate the potential risks associated with the widespread sharing of customer data through strict regulatory compliance.

Figure 1: An example open banking architecture

The essence of open banking/finance is leveraging common data exchange protocols to share financial data and services with third parties. In this blog, we will dive into the technical challenges and solutions of open banking from a data and data services perspective and explore how MongoDB empowers financial institutions to overcome these obstacles and unlock the full potential of this open ecosystem.

Dynamic environments and standards

As open banking standards continue to evolve, financial institutions must remain adaptable to meet changing regulations and industry demands. Traditional relational databases often struggle to keep pace with the dynamic requirements of open banking due to their rigid schemas, which are difficult to change and manage over time. In countries without standardized open banking frameworks, banks and third-party providers face the challenge of developing multiple versions of APIs to integrate with different institutions, creating complexity and hindering interoperability. Fortunately, open banking standards and guidelines (e.g., in Europe, Singapore, Indonesia, Hong Kong, and Australia) have generally required or recommended that open APIs be RESTful and support the JSON data format, which creates a basis for common data exchange. MongoDB addresses these challenges by offering a flexible developer data platform that natively supports the JSON data format, simplifies data modeling, and enables flexible schema changes for developers. With features like the MongoDB Data API and GraphQL API, developers can reduce development and maintenance effort by easily exposing data in a low-code manner. The Stable API feature ensures compatibility during database upgrades, preventing code breaks and providing a seamless transition. Additionally, MongoDB provides productivity-boosting features like full-text search, data visualization, data federation, mobile database synchronization, and other app services, enabling developers to accelerate time-to-market.
With MongoDB's capabilities, financial institutions and third-party providers can navigate the changing open banking landscape more effectively, foster collaboration, and deliver innovative solutions to customers. An example of a client who leverages MongoDB's native JSON data management and flexibility is NatWest, a major retail and commercial bank in the United Kingdom based in London, England. The bank has moved from zero to 900 million API calls per month within years as open banking uptake grows, and this volume is expected to grow tenfold in the coming years. At a MongoDB event on 15 Nov 2022, Jonathan Haggarty, NatWest's Head of "Bank of APIs" Technology – an API ecosystem that brings the retail bank's services to partners – shared in his presentation, titled Driving Customer Value using API Data, that NatWest's growing API ecosystem lets it "push a bunch of JSON data into MongoDB," which makes it "easy to go from simple to quite complex information" and also makes it easier to obfuscate user details through data masking for customer privacy. NatWest can surface customer data insights for partners via its API ecosystem, for example "where customers are on the e-commerce spectrum" and the "best time [for retailers] to push discounts," as well as insights on "most valuable customers" – with data being used for problem-solving, analytics and insight, and reporting.

Performance

In the dynamic landscape of open banking, meeting unpredictable demands for performance, scalability, and availability is crucial. The efficiency of applications and the overall customer experience heavily rely on the responsiveness of APIs. However, building an open banking platform becomes intricate when accommodating third-party providers with undisclosed business and technical requirements. Without careful management, this can lead to unforeseen performance issues and increased costs. Open banking demands high API performance under all kinds of workload volumes. The OBIE recommends an average TTLB (time to last byte) of 750 ms per endpoint response for all payment invitations (except file payments) and account information APIs. Compliance with regulatory service level agreements (SLAs) in certain jurisdictions further adds to the complexity. Legacy architectures and databases often struggle to meet these demanding criteria, necessitating extensive changes to ensure scalability and optimal performance. That's where MongoDB comes into play. MongoDB is purpose-built to deliver exceptional performance with its WiredTiger storage engine and its compression capabilities. Additionally, MongoDB Atlas improves performance through its intelligent index and schema suggestions, automatic data tiering, and workload isolation for analytics. One prime illustration of its capabilities is demonstrated by Temenos, a renowned financial services application provider, which achieved remarkable transaction processing performance and efficiency by leveraging MongoDB Atlas. Temenos recently ran a benchmark with MongoDB Atlas and Microsoft Azure and successfully processed an astounding 200 million embedded finance loans and 100 million retail accounts at a record-breaking 150,000 transactions per second. This showcases the power and scalability of MongoDB, with unparalleled performance to empower financial institutions to effectively tackle the challenges posed by open banking. MongoDB ensures outstanding performance, scalability, and availability to meet the ever-evolving demands of the industry.
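As a small illustration of the kind of index tuning the performance advisor suggests, the PyMongo sketch below creates a compound index to support a hypothetical account-information endpoint. The namespace and field names are illustrative only, not from a real open banking deployment.

```python
from pymongo import MongoClient, ASCENDING, DESCENDING

client = MongoClient("mongodb+srv://user:pass@cluster0.example.mongodb.net")  # hypothetical
accounts = client["openbanking"]["account_transactions"]

# A compound index matching the endpoint's query shape: equality filter on
# accountId first, then the sort field, so results stream straight from the index.
accounts.create_index([("accountId", ASCENDING), ("bookingDate", DESCENDING)])

# The API query can now be served without an in-memory sort.
page = (
    accounts.find({"accountId": "acc-123"})
    .sort("bookingDate", -1)
    .limit(50)
)
for txn in page:
    print(txn["bookingDate"], txn.get("amount"))
```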
Scalability

Building a platform to serve TPPs, who may not disclose their business usage and technical or performance requirements, can introduce unpredictable performance and cost issues if not managed carefully. For instance, a bank in Singapore faced an issue where its open APIs experienced peak loads and crashes every Wednesday. After investigation, the bank discovered that one of the TPPs ran a promotional campaign every Wednesday, resulting in a surge of API calls that overwhelmed the bank's infrastructure. A scalable solution that can perform under unpredictable workloads is critical, beyond meeting the performance requirements of a known volume of transactions. MongoDB's flexible architecture and scalability features address these concerns effectively. With its distributed, document-based data model, MongoDB allows for seamless scaling both vertically and horizontally. By leveraging sharding, data can be distributed across multiple nodes, ensuring efficient resource utilization and enabling the system to handle high transaction volumes without compromising performance. MongoDB's auto-sharding capability enables dynamic scaling as the workload grows, providing financial institutions with the flexibility to adapt to changing demands and ensuring a smooth and scalable open banking infrastructure.

Availability

In the realm of open banking, availability becomes a critical challenge. With increased reliance on banking services by third-party providers (TPPs), ensuring consistent availability becomes more complex. Previously, banks could bring down certain services during off-peak hours for maintenance. However, with TPPs offering 24x7 experiences, any downtime is unacceptable. This places greater pressure on banks to maintain constant availability for open API services, even during planned maintenance windows or unforeseen events. MongoDB Atlas, the fully managed global cloud database service, addresses these availability challenges effectively. With its multi-node cluster and multi-cloud DBaaS capabilities, MongoDB Atlas ensures high availability and fault tolerance. It offers the flexibility to run on multiple leading cloud providers, allowing banks to minimize concentration risk and achieve higher availability through a distributed cluster across different cloud platforms. The robust replication and failover mechanisms provided by MongoDB Atlas guarantee uninterrupted service and enable financial institutions to provide reliable and always-available open banking APIs to their customers and TPPs.

Security and privacy

Data security and consent management are paramount concerns for banks participating in open banking. The exposure of authentication and authorization mechanisms to third-party providers raises security concerns and introduces technical complexities regarding data protection. Banks require fine-grained access control and encryption mechanisms to safeguard shared data, including managing data-sharing consent at a granular level. Furthermore, banks must navigate the landscape of data privacy laws like the General Data Protection Regulation (GDPR), which impose strict requirements distinct from traditional banking regulations. MongoDB offers a range of solutions to address these security and privacy challenges effectively. Queryable Encryption provides a mechanism for managing encrypted data within MongoDB, ensuring sensitive information remains secure even when shared with third-party providers.
MongoDB's comprehensive encryption features cover data-at-rest and data-in-transit, protecting data throughout its lifecycle. MongoDB's flexible schema allows financial institutions to capture diverse data requirements for managing data-sharing consent and to unify user consent from different countries into a single data store, simplifying compliance with complex data privacy laws. Additionally, MongoDB's geo-sharding capabilities enable compliance with data residency laws by ensuring that relevant data and consent information remain in the closest cloud data center while providing optimal response times for data access. To enhance data privacy further, MongoDB offers field-level encryption techniques, enabling symmetric encryption at the field level to protect sensitive data (e.g., personally identifiable information) even when shared with TPPs. The random encryption of fields adds an additional layer of security while still allowing query operations on the encrypted data. MongoDB's Queryable Encryption technique further strengthens security and defends against cryptanalysis, ensuring that customer data remains protected and confidential within the open banking ecosystem.

Activity monitoring

With the numerous APIs offered by banks in the open banking ecosystem, activity monitoring and troubleshooting become critical aspects of maintaining a robust and secure infrastructure. MongoDB simplifies activity monitoring through its monitoring tools and auditing capabilities. Administrators and users can track system activity at a granular level, monitoring database system and application events. MongoDB Atlas provides Administration APIs, which can be used to programmatically manage the Atlas service. For example, one can use the Atlas Administration API to create database deployments, add users to those deployments, monitor those deployments, and more. These APIs can help with the automation of CI/CD pipelines as well as with monitoring activity on the data platform, freeing developers and administrators from this mundane effort so they can focus on generating more business value. Performance monitoring tools, including the performance advisor, help gauge and optimize system performance, ensuring that APIs deliver exceptional user experiences.

Figure 2: Activity monitoring on MongoDB Atlas

MongoDB Atlas Charts, an integrated feature of MongoDB Atlas, offers analytics and visualization capabilities. Financial institutions can create business intelligence dashboards using MongoDB Atlas Charts. This eliminates the need for the expensive licensing associated with traditional business intelligence tools, making it cost-effective as more TPPs utilize the APIs. With MongoDB Atlas Charts, financial institutions can offer comprehensive business telemetry data to TPPs, such as the number of insurance quotations, policy transactions, API call volumes, and performance metrics. These insights empower financial institutions to make data-driven decisions, improve operational efficiency, and optimize the customer experience in the open banking ecosystem.

Figure 3: Atlas Charts sample dashboard

Real-timeliness

Open banking introduces new challenges for financial institutions as they strive to serve and scale amidst unpredictable workloads from TPPs. While static content poses fewer difficulties, APIs requiring real-time updates or continuous streaming, such as dynamic account balances or ESG-adjusted credit scores, demand capabilities for near-real-time data delivery.
To enable applications to react immediately to changes as they occur, organizations can leverage MongoDB Change Streams, which are based on the aggregation framework and can react to data changes in a single collection, a database, or even an entire deployment (see the sketch at the end of this post). This capability further enhances MongoDB's real-time data and event processing and analytics capabilities. MongoDB also offers multiple mechanisms to support data streaming, including a Kafka connector for event-driven architectures and a Spark connector for streaming with Spark. These solutions empower financial institutions to meet the real-time data needs of their open banking partners effectively, enabling seamless integration and real-time data delivery for enhanced customer experiences.

Conclusion

MongoDB's technical capabilities position it as a key enabler for financial institutions embarking on their open banking journey. From managing dynamic environments and accommodating unpredictable workloads to ensuring scalability, availability, security, and privacy, MongoDB provides a comprehensive set of tools and features to address the challenges of open banking effectively. With MongoDB as the underlying infrastructure, financial institutions can navigate the ever-evolving open banking landscape with confidence, delivering innovative solutions and driving the future of banking. Embracing MongoDB empowers financial institutions to unlock the full potential of open banking and provide exceptional customer experiences in this era of collaboration and digital transformation. If you would like to learn more about how you can leverage MongoDB for your open banking infrastructure, take a look at the below resources:

Open banking panel discussion: future-proof your bank in a world of changing data and API standards with MongoDB, Celent, Icon Solutions, and AWS
How a data mesh facilitates open banking
Financial services hub
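Here is the minimal PyMongo change streams sketch referenced in the real-timeliness section above. The namespace, filter, and downstream notification logic are hypothetical placeholders.

```python
from pymongo import MongoClient

client = MongoClient("mongodb+srv://user:pass@cluster0.example.mongodb.net")  # hypothetical
balances = client["openbanking"]["account_balances"]

# Watch only balance updates, using an aggregation pipeline as the filter.
pipeline = [{"$match": {"operationType": "update"}}]

# full_document="updateLookup" asks the server to include the post-update document.
with balances.watch(pipeline, full_document="updateLookup") as stream:
    for change in stream:
        doc = change["fullDocument"]
        # Hypothetical downstream hook: push the fresh balance to subscribed TPPs.
        print(f"Account {doc['accountId']} balance is now {doc['available']}")
```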

June 6, 2023
Applied

Empower Modern App Developers with Document Databases

Across industries, business success depends on a company's ability to deliver new digital experiences through software. The speed at which a company can develop and deploy a new application with innovative features is a direct lever on business outcomes. Given the vital role developers play in the success of your business, it stands to reason that equipping them with the tools to maximize their productivity is in your best interest. Unfortunately, many organizations are unaware of the tax they're placing on their development teams by using a relational database. While the relational database has been a bedrock for data-driven applications for 50 years, it was developed in an era before the internet and is a poor fit as the foundation of today's web and mobile applications. Document databases, which have emerged over the past decade, have cemented themselves as the most popular and widely used alternative to the tabular model found in traditional relational databases. Document databases have become so powerful that even relational databases are trying to emulate them. Built around JavaScript Object Notation (JSON)-like documents, document databases are intuitive for developers to use. Instead of the rigid row-and-column structure of the relational model, document databases map documents directly to objects in code, which is how coders naturally think of and work with data. Let's break down the key advantages of document databases in building modern applications. We'll see why the document model's flexibility eliminates the complex intergroup dependencies that have traditionally slowed developers down.

The limitations of the relational database model

Relational databases add complexity to a developer's workload, severely hampering the velocity of work. The rigid row-and-column structure creates a mismatch between the way developers think of code and data and how they need to store it. Additionally, while the relational model was fine in an age when most applications used a small, preset collection of attributes such as last names, ZIP codes, and state abbreviations, the majority of data collected by organizations today is rich in structure. We have given names and sometimes preferred names. We have unique attributes that are relevant to only some of us: for example, people with PhDs have dissertation topics, sports lovers have favorite sports, and our families come in every conceivable shape and size. This richly structured data reflects how we actually think about the real world, and it's very difficult to flatten, store, analyze, or query using rows and columns. With relational databases, developers can feel stuck in quicksand, with changes to their applications requiring them to carefully collaborate with experts like database administrators (DBAs), who help them translate their schemas and queries to the underlying relational data models and ensure that indexing strategies are appropriately employed. The layer of indirection increases cognitive load, is hard to reason about, and slows everything down. More than half of application changes require database schema modifications, and those database modifications take longer to complete than the application changes they are designed to support. You can quickly see why these complicated efforts severely slow the delivery of new software features into production.

Enabling development of modern apps with document databases

With the birth of the internet and the proliferation of mobile and web apps, developers' roles evolved.
The emergence of robust development frameworks, which abstracted away underlying complexity, and the rise of DevOps led organizations to consolidate developer functions. The new generation of full-stack developers wanted databases that better addressed their applications' requirements and their ways of working with data. The founders of MongoDB recognized the need for a modern database solution while at the adtech giant DoubleClick in 2007, where, due to the constraints of the relational model, they were unable to scale to the 400,000 transactions per second the business required. These challenges inspired them to create a new, modern, general-purpose database that could address the shortcomings of the relational data model and offer a solution developers actually wanted to use. The result was a horizontally scalable, document-based NoSQL database called MongoDB. The document database model in general, and MongoDB more specifically, addresses the limitations of relational databases in several notable ways:

Intuitive data model: The documents at the center of document databases have a universal data format. JSON is a lightweight, language-independent, and human-readable format that has become a widely used standard for storing and exchanging data. These documents map directly to data structures in popular programming languages, so there is no need for the additional mapping layer often used with relational databases. Because data that is accessed together is stored together, there is less code to write; developers don't need to decompose data across tables or run joins. (A minimal sketch of this pattern appears at the end of this post.)

Flexible schema: These JSON-like documents are flexible. Each document can have its own fields, and there's no need to pre-define the schema in the database; it can be modified at any time. That flexibility enhances developer agility.

Meeting user expectations while simplifying application architectures

The most innovative applications we use in our daily lives — think Netflix and Instagram — have raised user expectations for what every application should be. Today we expect applications to be:

Highly responsive
Able to deliver relevant information
Optimized for mobile devices
Secure
Powered by real-time insights
Continuously improved

Meeting those expectations can be extremely challenging, especially for developers using relational databases. A typical data infrastructure built around a legacy relational database can trap your development team in overly complex and siloed architectures. Document databases, on the other hand, can simplify application architectures. Documents are a superset of all other data models, so developers can store and work with a variety of data types, and development teams can accommodate most of their use cases in a single data model and database. The document data model can help your developers overcome the limitations of the relational model while improving their productivity and velocity, allowing them to minimize the undifferentiated work of maintaining infrastructure and to focus on meeting demanding user expectations. As a result, they can deliver better, more innovative applications faster than before. Click here to read the original article published on The New Stack.
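As a small illustration of the document model described above, here is a PyMongo sketch of one customer document carrying nested, per-customer attributes that would require several joined tables in a relational schema. The database, collection, and fields are hypothetical.

```python
from pymongo import MongoClient

client = MongoClient("mongodb+srv://user:pass@cluster0.example.mongodb.net")  # hypothetical
customers = client["app"]["customers"]

# Data that is accessed together is stored together: names, addresses, and
# optional attributes live inside one document instead of separate joined tables.
customers.insert_one({
    "name": {"given": "Ada", "preferred": "A.", "family": "Lovelace"},
    "addresses": [
        {"type": "home", "city": "London", "zip": "N1 9GU"},
    ],
    # Flexible schema: fields that apply to only some customers.
    "dissertationTopic": "Analytical engines",
    "favoriteSports": ["fencing"],
})

# One query returns the whole object the application works with -- no joins.
doc = customers.find_one({"name.family": "Lovelace"})
print(doc["addresses"][0]["city"])
```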
Meeting user expectations while simplifying application architectures

The most innovative applications we use in our daily lives (think Netflix and Instagram) have raised user expectations for what every application should be. Today we expect applications to be:

- Highly responsive
- Able to deliver relevant information
- Optimized for mobile devices
- Secure
- Powered by real-time insights
- Continuously improved

Meeting those expectations can be extremely challenging, especially for developers using relational databases. A typical data infrastructure built around a legacy relational database can trap your development team in overly complex and siloed architectures. Document databases, on the other hand, can simplify application architectures. Documents are a superset of all other data models, so developers can store and work with a variety of data types, and development teams can accommodate most of their use cases in a single data model and database.

The document data model can help your developers overcome the limitations of the relational model while improving their productivity and velocity, allowing them to minimize the undifferentiated work of maintaining infrastructure and to focus on meeting demanding user expectations. As a result, they can deliver better, more innovative applications faster than before. Click here to read the original article published on The New Stack.
June 5, 2023
Applied

How Edenlab Built a High-Load, Low-Code FHIR Server to Deliver Healthcare for 40 Million Plus Patients

The Kodjin FHIR server has speed and scale in its DNA. Edenlab, the Ukrainian company behind Kodjin, built our original FHIR solution to digitize and service the entire Ukrainian national health system. The learnings and technologies from that project informed our development of the Kodjin FHIR server.

"At Edenlab, we have always been driven by our passion for building solutions that excel in speed and scale. With Kodjin, we have embraced a modern tech stack to deliver unparalleled performance that can handle the demands of large-scale healthcare systems, providing efficient data management and seamless interoperability."
Eugene Yesakov, Solution Architect, Author of Kodjin

Built for speed and scale

While most healthcare projects involve handling large volumes of data, including patient records, medical images, and sensor data, the Kodjin FHIR server is based on a system developed to handle tens of millions of patient records and thousands of requests per second, ensuring timely access and efficient decision-making for a population of over 40 million people. And all of this information had to be processed and exchanged in real time or near real time, without delays or bottlenecks. This article will explore some of the architectural decisions the Edenlab team made when building Kodjin, specifically the role MongoDB played in enhancing performance and ensuring scalability. We will examine the benefits of leveraging MongoDB's scalability, flexibility, and robust querying capabilities, as well as its ability to handle the increasing velocity and volume of healthcare data without compromising performance.

About the Kodjin FHIR server

Kodjin is an ONC-certified and HIPAA-compliant FHIR server that offers hassle-free healthcare data management. It has been designed to meet the growing demands of healthcare projects, allowing for the efficient handling of increasing data volumes and concurrent requests. Its architecture, built on a horizontally scalable microservices approach, utilizes cutting-edge technologies such as the Rust programming language, MongoDB, Elasticsearch, Kafka, and Kubernetes. These technologies enable Kodjin to provide users with a low-code approach while harnessing the full potential of the FHIR specification.

A deeper dive into the architecture: the role of MongoDB in Kodjin

When deciding on the technology stack for the Kodjin FHIR server, the Edenlab team knew that a document database would be required to serve as a transactional data store. In a FHIR server, a transactional data store ensures that data operations occur in an atomic and consistent manner, preserving the integrity and reliability of the data. Document databases are well suited for this purpose, as they provide a flexible schema and allow for storing complex data structures, such as those found in FHIR data. FHIR resources are represented in a hierarchical structure and can be quite intricate, with nested elements and relationships. Document databases like MongoDB excel at handling such complex and hierarchical data structures, making them an ideal choice for storing FHIR data.

In addition to supporting document storage, the Edenlab team needed the chosen database to provide transactional capabilities for FHIR data operations. FHIR transactions, which encompass a set of related data operations that should either succeed or fail as a whole, are essential for maintaining data consistency and integrity. They can also be used to roll back changes if any part of the transaction fails. MongoDB provides support for multi-document transactions, enabling atomic operations across multiple documents within a single transaction. This aligns well with the transactional requirements of FHIR data and ensures data consistency in Kodjin.
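Kodjin's own implementation is not shown here, but the following minimal sketch, using the MongoDB Node.js driver with hypothetical patients and observations collections, shows the multi-document transaction pattern the article describes, where the writes in a FHIR-style bundle commit or roll back as a unit:

```typescript
import { MongoClient } from "mongodb";

// Transactions require a replica set; this is a placeholder URI.
const client = new MongoClient("mongodb://localhost:27017/?replicaSet=rs0");

async function saveBundle() {
  const db = client.db("fhir");
  const session = client.startSession();
  try {
    // Both inserts commit together or not at all; if either throws,
    // the whole transaction is rolled back.
    await session.withTransaction(async () => {
      await db.collection("patients").insertOne(
        { resourceType: "Patient", id: "pat-1", name: [{ family: "Doe" }] },
        { session }
      );
      await db.collection("observations").insertOne(
        { resourceType: "Observation", subject: { reference: "Patient/pat-1" } },
        { session }
      );
    });
  } finally {
    await session.endSession();
  }
}

saveBundle().finally(() => client.close());
```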
Implementation of GridFS as storage for terminologies in the Terminology service

A terminology service plays a vital role in FHIR projects, requiring a reliable and efficient storage solution for the terminologies used. Kodjin employs GridFS, a file storage mechanism within MongoDB designed for large files, which makes it well suited to handling terminologies. GridFS offers a convenient way to store and manage terminology files, ensuring easy accessibility and seamless integration within the FHIR ecosystem. By utilizing MongoDB's GridFS, Kodjin ensures efficient storage and retrieval of terminologies, enhancing the overall functionality of the terminology service.
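As a rough sketch of the pattern rather than Kodjin's actual code (the file name, database, and bucket name here are hypothetical), GridFS streams large terminology files in and out of MongoDB, chunking them transparently:

```typescript
import { createReadStream } from "node:fs";
import { MongoClient, GridFSBucket } from "mongodb";

const client = new MongoClient("mongodb://localhost:27017"); // placeholder URI
const bucket = new GridFSBucket(client.db("terminology"), {
  bucketName: "codeSystems", // hypothetical bucket name
});

// Upload: the driver splits the file into chunk documents automatically,
// so even very large terminology releases fit comfortably.
createReadStream("./terminology-release.json") // hypothetical file
  .pipe(bucket.openUploadStream("terminology-release.json"))
  .on("finish", () => {
    // Download: read the stored file back as a stream by name.
    const download = bucket.openDownloadStreamByName("terminology-release.json");
    download.pipe(process.stdout);
    download.on("end", () => client.close());
  });
```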
Kodjin FHIR server performance

To evaluate the efficiency and responsiveness of the Kodjin FHIR server in various scenarios, we conducted multiple performance tests using Locust, an open-source load-testing tool. One of the metrics measured was the retrieval of resources by their unique IDs using the GET-by-ID operation. Kodjin with MongoDB achieved 1,721.8 requests per second (RPS) for this operation, indicating that the server can efficiently retrieve specific resources, enabling quick access to desired data.

The search operation, which involves querying Elasticsearch to obtain the IDs of the searched resources and then retrieving them from MongoDB, achieved 1,896.4 RPS. This highlights the effectiveness of polyglot persistence in Kodjin, which leverages Elasticsearch for fast and efficient search queries and MongoDB for resource retrieval. The system demonstrated its ability to process search queries and retrieve relevant results promptly.

In terms of resource creation, Kodjin with MongoDB achieved 1,405.6 RPS for POST resource operations, showing that the system can effectively handle numerous resource-creation requests. The efficient processing and insertion of new resources into the MongoDB database ensure seamless data persistence and scalability.

Overall, the performance tests confirm that Kodjin with MongoDB delivers efficient and responsive performance across various FHIR operations. The high RPS values obtained demonstrate the system's capability to handle significant workloads and provide timely access to resources through GET-by-ID, search, and POST operations.

Conclusion

Kodjin leverages a modern tech stack, including Rust, Kafka, and Kubernetes, to deliver the highest levels of performance. At the heart of Kodjin is MongoDB, which serves as a transactional data store. MongoDB capabilities such as multi-document transactions and a flexible schema ensure the integrity and consistency of FHIR data operations, and the use of GridFS within MongoDB ensures efficient storage and retrieval of terminologies, optimizing the functionality of the Terminology service.

To experience the power and potential of the Kodjin FHIR server firsthand, we invite you to contact the Edenlab team for a demo. For more information on MongoDB's work in healthcare, and to understand why the world's largest healthcare companies trust MongoDB, read our whitepaper on radical interoperability.
May 31, 2023
Applied

Accelerating to T+1 - Have You Got the Speed and Agility Required to Meet the Deadline?

Thank you to Ainhoa Múgica and Karolina Ruiz Rogelj for their contributions to this post.

On May 28, 2024, the Securities and Exchange Commission (SEC) will implement a move to T+1 settlement for standard securities trades, shortening the settlement period from two business days after the trade date to one. The change aims to address market volatility and reduce credit and settlement risk. The shortened T+1 settlement cycle can potentially decrease market risks, but most firms' current back-office operations cannot handle this change, due to several challenges with existing systems:

- Manual processes will be under pressure due to the shortened settlement cycle
- Batch data processing will not be feasible

To prepare for T+1, firms should take urgent action to address these challenges:

- Automate manual processes to streamline them and improve operational efficiency
- Replace batch processing with event-based, real-time processing for faster settlement

In this blog, we will explore how MongoDB can be leveraged to accelerate the automation of manual processes and to replace batch processes, enabling faster settlement.

What are T+1 and T+2 settlements?

T+1 settlement refers to the practice of settling transactions on the trading day after the trade is executed, for trades executed before 4:30 pm. For example, if a transaction is executed on Monday before 4:30 pm, settlement occurs on Tuesday. The settlement process involves the transfer of securities and/or funds from the seller's account to the buyer's account. This contrasts with T+2 settlement, where trades are settled two trading days after the trade date. According to SEC Chair Gary Gensler, "T+1 is designed to benefit investors and reduce the credit, market, and liquidity risks in securities transactions faced by market participants."

Overcoming T+1 transition challenges with MongoDB: Two unique solutions

1. The multi-cloud developer data platform accelerates manual process automation

Legacy settlement systems may involve manual intervention for various tasks, including manual matching of trades, manual input of settlement instructions, allocation emails to brokers, reconciliation of trade and settlement details, and manual processing of paper-based documents. These manual processes can be time-consuming and prone to errors. MongoDB (Figure 1 below) can help accelerate developer productivity in several ways:

- Easy to use: MongoDB is designed to be easy to use, which can reduce the learning curve for developers who are new to the database.
- Flexible data model: Allows developers to store data in a way that makes sense for their application. This can help accelerate development by reducing the need for complex data transformations or ORM mapping.
- Scalability: MongoDB is highly scalable, which means it can handle large volumes of trade data and support high levels of concurrency.
- Rich query language: Allows developers to perform complex queries without writing much code. MongoDB's Apache Lucene-based search can also help screen large volumes of data against sanctions and watch lists in real time (see the sketch after this list).

Figure 1: MongoDB's developer data platform

Discover the developer productivity calculator. Developers spend 42% of their work week on maintenance and technical debt. How much does this cost your organization? Calculate how much you can save by working with MongoDB.
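As one hedged example of that search capability, the sketch below runs a fuzzy Atlas Search query against a hypothetical sanctions collection. It assumes an Atlas Search index named "default" already exists on the fullName field; the database, collection, and connection string are illustrative only:

```typescript
import { MongoClient } from "mongodb";

const client = new MongoClient("mongodb+srv://cluster.example.mongodb.net"); // placeholder URI

async function screenCounterparty(name: string) {
  const sanctions = client.db("compliance").collection("sanctions");

  // $search runs on the Lucene-based Atlas Search index; fuzzy matching
  // tolerates the misspellings and transliterations common in watch lists.
  return sanctions
    .aggregate([
      {
        $search: {
          index: "default",
          text: { query: name, path: "fullName", fuzzy: { maxEdits: 2 } },
        },
      },
      { $limit: 5 },
      { $project: { fullName: 1, listName: 1 } },
    ])
    .toArray();
}

screenCounterparty("Jon Smyth")
  .then(console.log)
  .finally(() => client.close());
```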
2. An operational trade store to replace slow batch processing

Back-office technology teams face numerous challenges when consolidating transaction data due to the complexity of legacy batch ETL and integration jobs. Legacy databases have long been the industry standard, but they are not optimal for post-trade management because of limitations such as rigid schemas, difficulty with horizontal scaling, and slow performance. For T+1 settlement, it is crucial to have real-time availability of consolidated positions across assets, geographies, and business lines; waiting for the end of a batch cycle will not meet this requirement.

As a solution, MongoDB customers use an operational trade data store (ODS) to overcome these challenges and enable real-time data sharing. By using an ODS, financial firms can improve their operational efficiency by consolidating transaction data in real time. This allows them to streamline their back-office operations, reduce the complexity of ETL and integration processes, and avoid the limitations of relational databases. As a result, firms can make faster, more informed decisions and gain a competitive edge in the market.

Using MongoDB (Figure 2 below), trade desk data is copied into an ODS in real time through change data capture (CDC), creating a centralized trade store that acts as a live source for downstream trade settlement and compliance systems. This enables faster settlement times, improves data quality and accuracy, and supports full transactionality. As the ODS evolves, it becomes a "system of record/golden source" for many back-office and middle-office applications, and powers AI/ML-based real-time fraud prevention applications and settlement-risk-failure systems. A minimal CDC sketch follows the figure.

Figure 2: Centralized trade data store (ODS)
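Change data capture can be implemented several ways; one minimal sketch using MongoDB change streams (the collections, databases, and tradeId field are hypothetical, and change streams require a replica set) looks like this:

```typescript
import { MongoClient } from "mongodb";

const client = new MongoClient("mongodb://localhost:27017/?replicaSet=rs0"); // placeholder URI

async function replicateTrades() {
  const trades = client.db("frontOffice").collection("trades");
  const ods = client.db("backOffice").collection("tradeStore");

  // watch() opens a change stream: every committed write on the trade
  // desk collection is delivered to this loop in near real time.
  for await (const event of trades.watch()) {
    if (event.operationType === "insert") {
      // Upsert into the operational trade store so downstream
      // settlement and compliance systems see a live, consolidated view.
      await ods.replaceOne(
        { tradeId: event.fullDocument.tradeId },
        event.fullDocument,
        { upsert: true }
      );
    }
  }
}

replicateTrades().catch(console.error);
```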
Managing trade settlement failure risk is critical to driving efficiency across the entire securities market ecosystem. Luckily, MongoDB's integration capabilities (Figure 3 below) with modern AI and ML platforms enable banks to develop AI/ML models that make managing potential trade settlement fails much more efficient from a cost, time, and quality perspective. Additionally, predictive analytics allow firms to project availability and demand and optimize inventories for lending and borrowing.

Figure 3: Event-driven application for real-time monitoring

Summary

Financial institutions face significant challenges in reducing settlement duration from two business days (T+2) to one (T+1), particularly when it comes to addressing existing back-office issues. However, it's crucial for them to achieve this goal within a year, as required by the SEC. This blog highlights how MongoDB's developer data platform can help financial institutions automate manual processes and adopt a best-practice approach to replacing batch processes with a real-time data store (ODS). With the help of MongoDB's developer data platform and best practices, financial institutions can achieve operational excellence and meet the SEC's T+1 settlement deadline of May 28, 2024. And should T+0 settlement cycles become a reality, institutions with the most flexible data platform will be best equipped to adjust. Top banks in the industry are already adopting MongoDB's developer data platform to modernize their infrastructure, leading to reduced time-to-market, lower total cost of ownership, and improved developer productivity.

Looking to learn more about how you can modernize, or what MongoDB can do for you?

- Zero downtime migrations using MongoDB's flexible schema
- Accelerate your digital transformation with these 5 Phases of Banking Modernization
- Reduce time-to-market for your customer lifecycle management applications
- MongoDB's financial services hub

May 25, 2023
Applied

4 Ways MongoDB Solves Healthcare's Interoperability Puzzle

Picture this: You're on a road trip, driving across the country, taking in the beautiful scenery, and enjoying the freedom of the open road. But suddenly, the journey comes to a screeching halt as you fall seriously ill and need emergency surgery. The local hospital rushes you into the operating room, but how will they know what medications you're allergic to, or what conditions you've been treated for in the past?

Figure 1: Before and after interoperability

In a perfect world, the hospital staff would have access to all of your medical records, seamlessly integrated into one interoperable electronic health record (EHR) system. This would enable them to quickly and accurately treat you, as seen in Figure 1. Unfortunately, the reality is that data is often siloed, fragmented, and difficult to access, making it nearly impossible for healthcare providers to get a complete picture of their patients' health. That's where interoperability comes in, enabling seamless integration of data from different sources and formats and giving healthcare providers easy access to the information they need, even across different health providers. And at the heart of solving the interoperability challenge is MongoDB, an ideal foundation for building a truly interoperable data repository. In this blog post, we'll explore four ways MongoDB stands out in the interoperability software space and show how its unique capabilities make it the missing piece in healthcare's interoperability puzzle. Let's get started!

1. Document flexibility

MongoDB's document data model is a natural fit for managing healthcare data. It allows you to work with data in JSON format, eliminating the need to flatten or transform it into a string. This simplifies the implementation of common interoperability standards for clinical and terminology data, such as HL7 FHIR and openEHR, as well as SNOMED and LOINC, because all of these standards also support JSON. The document model also supports nested and hierarchical data structures, making it easier to represent complex clinical data with varying levels of detail and granularity.

MongoDB's document model also provides flexibility in managing healthcare data, allowing for dynamic and self-describing schemas. With no need to pre-define the schema, fields can vary from document to document and can be modified at any time without disruptive schema migrations. This makes it easy for healthcare providers to add or update information in clinical documents, such as when new interoperability standards are released, ensuring that healthcare data is kept accurate and up to date without requiring database reconfiguration or downtime.
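For instance, here is a hedged sketch using the MongoDB Node.js driver (the database and collection names are hypothetical) of storing a standard FHIR R4 Patient resource exactly as it arrives and querying into its hierarchy with dot notation:

```typescript
import { MongoClient } from "mongodb";

const client = new MongoClient("mongodb://localhost:27017"); // placeholder URI

async function main() {
  const patients = client.db("cdr").collection("patients");

  // A FHIR Patient resource is already JSON, so it is stored verbatim,
  // nested names and contact points included; nothing is flattened.
  await patients.insertOne({
    resourceType: "Patient",
    id: "example",
    name: [{ use: "official", family: "Chalmers", given: ["Peter", "James"] }],
    telecom: [{ system: "phone", value: "(03) 5555 6473", use: "work" }],
    birthDate: "1974-12-25",
  });

  // Dot notation queries straight into the hierarchical structure.
  console.log(await patients.findOne({ "name.family": "Chalmers" }));
}

main().finally(() => client.close());
```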
2. Scalability

Dealing with large healthcare datasets can be challenging for traditional relational database systems, but MongoDB's horizontal scaling offers a solution. With horizontal scaling, healthcare providers can easily distribute their data across multiple servers and cloud providers (AWS, GCP, and Azure), resulting in increased processing power and faster query times. It also makes storage more cost-efficient, as growing vertically is more expensive than growing horizontally. Healthcare providers can therefore scale their systems seamlessly as their data volumes grow while maintaining performance and reliability.

MongoDB's reliability is ensured through its replication architecture: each database replica set consists of three nodes, providing fault tolerance and automatic failover in the event of node failure. Horizontal scaling also improves reliability by adding more servers or nodes to the system, reducing the risk of a single point of failure.

3. Performance

When it comes to healthcare data, query performance can make all the difference in delivering timely and accurate care, and that's another area where MongoDB shines. MongoDB holds data in a format that is optimized for storage and retrieval, allowing it to quickly and efficiently read and write data. MongoDB's advanced querying capabilities, backed by compound and wildcard indexes, make it a standout solution for healthcare applications. MongoDB Atlas Search, using Apache Lucene indexing, also enables efficient querying across vast data sets, handling complex queries with multiple fields. This is especially useful for clinical data repositories (CDRs), which permit almost unlimited querying flexibility. Atlas Search indexing also enables advanced search features, letting medical professionals quickly and accurately access the information they need from any device.
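To ground the indexing point, here is a minimal sketch (the observations collection and field paths are hypothetical) of the compound and wildcard indexes mentioned above:

```typescript
import { MongoClient } from "mongodb";

const client = new MongoClient("mongodb://localhost:27017"); // placeholder URI

async function main() {
  const observations = client.db("cdr").collection("observations");

  // Compound index for a known access pattern: all results of one lab
  // code for one patient, newest first.
  await observations.createIndex({
    "subject.reference": 1,
    "code.coding.code": 1,
    effectiveDateTime: -1,
  });

  // Wildcard index covers ad-hoc queries on fields that vary from
  // resource to resource, which is typical of flexible FHIR documents.
  await observations.createIndex({ "$**": 1 });
}

main().finally(() => client.close());
```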
4. Security

Figure 2: Fine-grained access control

The security of sensitive clinical data is paramount in the healthcare industry. That's why MongoDB provides an array of robust security features, including fine-grained access control and auditing, as seen in Figure 2. With Client-Side Field Level Encryption (CSFLE) and Queryable Encryption, MongoDB is the only data platform that allows the processing of randomly encrypted patient data, providing the highest level of data security with minimal impact on performance. Additionally, MongoDB Atlas supports VPC peering and private endpoints, permitting secure connections to healthcare applications wherever they are hosted. By implementing strong security measures from the start, organizations can ensure privacy by design.

Partner ecosystem

MongoDB is the only non-relational database and modern data platform that directly collaborates with clinical data repository (CDR) vendors such as Smile, Exafluence, Better, Firely, and others. While some vendors offer MongoDB as an alternative to a relational database, others have built their solutions exclusively on MongoDB; one example is the Kodjin FHIR server. MongoDB has also extended its capabilities to integrate with AWS FHIR Works, enabling healthcare providers and payers to deploy a FHIR server with MongoDB Atlas through the AWS Marketplace. With MongoDB's unique approach to data storage and retrieval and its ability to work with CDR vendors, millions of patients worldwide are already benefiting from its use.

Beyond interoperability with MongoDB

Access to complete medical records is often limited by data silos and fragmentation, leaving healthcare providers with an incomplete picture of their patients' health. That's where MongoDB's interoperability solution comes in as the missing puzzle piece the healthcare industry needs. With MongoDB's document flexibility, scalability, performance, and security features, healthcare providers can access accurate and up-to-date patient information in real time. But MongoDB's solution goes beyond that. Radical interoperability with MongoDB means that healthcare providers own the data layer: they can put the stored data to any use and connect it to any existing applications or APIs. They're free to work with any healthcare data standard, including custom schemas, and to leverage the data for use cases beyond storage and interoperability. The future of healthcare is here, and with MongoDB leading the way, we can expect to see more innovative solutions that put patients first. If you're interested in learning more about radical interoperability with MongoDB, check out our brochure.

May 18, 2023
Applied

Aerofiler Brings Breakthrough Automation to the Legal Profession

Don Nguyen is the perfect person to solve a technology problem in the legal space. Don spent several years in software engineering before eventually becoming a lawyer, and in legal practice he discovered just how much manual, administrative work legal professionals have to do. The company he co-founded, Aerofiler, takes the different parts of the contract lifecycle and digitises them to eliminate manual work, allowing lawyers to focus on things that require their expertise. Don says the legal profession has always been behind industries like accounting, marketing, and finance when it comes to leveraging technology to increase productivity. Both Don and his co-founder, Stuart Loh, thought they could automate many manual tasks for legal professionals through an AI-powered contract lifecycle management solution.

Turning mountains into automation

Law firms generate mountains of paperwork that must be digitised and filed. Searching contracts post-execution can be an arduous task using the legacy systems most firms run on today. Initially, Don, Stuart, and Jarrod Mirabito (co-founder and CTO) set out to make searching contracts and tracking obligations easier. As the service became more popular, customers started asking for more capabilities, like digitising and automating the approval process. Aerofiler's solution now manages the entire contract lifecycle, from drafting and negotiations to approvals, signing, and filing.

Don says the difficulty with using AI to extract data is that you usually can't see where the data is coming from, and you can't train your models to extract a concept that might be specific to your industry. Aerofiler supports custom extraction, so firms can crawl for and find exactly the results they're looking for, and it highlights exactly where in the contract the data is found. Aerofiler is unique as a modern, cloud-based contract lifecycle management solution that streamlines contract management processes and enhances workflow efficiency. It features AI-powered analytics, smart templates, and real-time collaboration tools, and is highly configurable to fit the unique needs of different companies. Aerofiler's user interface is also highly intuitive and user-friendly, leading to greater user adoption and overall efficiency.

The startup stack

Don has over 10 years of experience working with MongoDB and describes it as very robust. When it was time to choose a database for their startup, MongoDB Atlas was an easy choice. One of the big reasons Don chose Atlas is so the team doesn't have to manage its own infrastructure. Atlas provides the functionality for text search, storage, and metadata retrieval, making it easy to hit the ground running. On top of MongoDB, the system runs Express.js, Vue.js, and Node.js, also known as a MEVN stack. In choosing a database, Don points out that every assumption you make will have exceptions, and no matter what your requirements are now, they will inevitably change. So one of the key factors in making a decision is how the database will handle those changes when they come. In his experience, NoSQL databases like MongoDB are easy to deploy and maintain. And, with MongoDB offering ACID transactions, they get much of the functionality they would otherwise look for in a relational database stack.

How startups grow up

Aerofiler is part of the MongoDB for Startups program, which helps early-stage, high-growth startups build faster and scale further.
MongoDB for Startups offers access to a wide range of resources, including free credits for our best-in-class developer data platform, MongoDB Atlas, personalized technical advice, co-marketing opportunities, and access to our robust developer community. Don says the free credits helped the startup at a time when resources were tight. The key to their success, Don says, is solving the problems their customers have. As for the road ahead, Don is excited about ChatGPT and says there are some very interesting applications for generative AI in the legal space. If anyone would like to talk about what generative AI is and how it could work in the legal space, he's happy to take those calls and emails. Are you part of a startup and interested in joining the MongoDB for Startups program? Apply now.

May 17, 2023
Applied

Temenos Banking Cloud Scales to Record High Transactions with MongoDB Atlas and Microsoft Azure

Thank you to Ainhoa Múgica and Karolina Ruiz Rogelj for their contributions to this post.

Banking used to be a somewhat staid, hyper-conservative industry, seemingly evolving over eons. But the emergence of fintech and pure digital players in the market, paired with alternatives in technology, is transforming the industry. The combination of MACH, BIAN, and composable designs enables true innovation and collaboration within the banking sector, and the introduction of cloud services makes these approaches even easier to implement. Just ask Temenos, the world's largest financial services application provider, delivering banking for more than 1.2 billion people. Temenos is leading the way in banking software innovation and offers a seamless experience for its client community in over 150 countries. Temenos embraces a cloud-first, microservices-based infrastructure built with MongoDB, giving customers flexibility while also delivering significant performance improvements. Financial institutions can embed Temenos components, like Pay-as-you-go, that deliver new functionality to their existing on-premises environments, on their own cloud deployments, or through a full banking-as-a-service experience with Temenos Transact powered by MongoDB on various cloud platforms. This new MongoDB-based infrastructure enables Temenos to rapidly innovate on its customers' behalf, while improving security, performance, and scalability.

Fintech, payments, and core banking

Temenos and MongoDB joined forces in 2019 to investigate the path toward data in a componentized world. Over the past few years, our teams have collaborated on a number of new, innovative component services to enhance the Temenos product family, and several banking clients are now using those components in production. The approach we've taken allows banks to upgrade on their own terms: by putting components "in front" of the Temenos Transact platform, banks can start using a componentization solution without disrupting their ability to serve existing customer requirements. From May 2023 onwards, banks will have the capability to deploy Temenos Infinity microservices, as well as the core banking Temenos Transact, exclusively on the developer data platform from MongoDB and derive even more value.

Making the composable approach even more valuable, Temenos implemented its new data backend firmly based on JSON and the document model. MongoDB allows fully transparent access to data and the exploitation of additional features of the developer data platform, including Atlas Search, application-driven analytics, and AI through workload isolation. Customers also benefit from geographic distribution of data based solely on customer requirements, be it in a single country driven by sovereignty requirements or distributed across continents to ensure always-on operation and the best possible data access and speed for trading.

Improved performance and scale

In contrast to the retail-centric benchmark last year, the approach this time was to test broader functionality and include more diverse business areas, all while increasing the transaction volume by 50%. The benchmark scenario simulated a client with 50 million retail customers, 100 million accounts, and a banking-as-a-service (BaaS) offering for 10 brands and 50 million embedded finance customers on a single cloud instance.
In the test, Temenos Banking Cloud processed 200 million embedded finance loans and 100 million retail accounts at a record-breaking 150,000 transactions per second. In doing so, Temenos proved its robust and scalable platform can support banks' business models for growth, whether through BaaS or distributing their products themselves. The benchmark included not just core transaction processing but a composed solution combining payments, financial crime mitigation (FCM), a data hub, and digital channels.

"No other banking technology vendor comes close to the performance and scalability of Temenos Banking Cloud. We consistently invest more in cloud technologies and have more banks live with core banking in the cloud than any of our peers. With global non-cash transaction volumes skyrocketing in response to fast-emerging trends like BaaS, banks need a platform that allows them to elastically scale based on business demand, provide composable capabilities on-demand at a low cost, while reducing their environmental impact. This benchmark with Microsoft and MongoDB proves the capability of Temenos' platform to power the world's biggest banks and their BaaS offerings with hundreds of millions of customers, efficiently and sustainably in the cloud."
Tony Coleman, Chief Technology Officer, Temenos

This solution landscape reflects an environment where everyone on the planet runs two banking transactions a day on a single bank. This throughput should cater to any Tier 1 banking deployment in size and performance, and cover any future growth plans. Below are the transaction details that comprise the actual benchmark mix. As mentioned above, it is a broad mix of different functionality, behaving like a retail bank and a fintech institution that provides multiple product brands (e.g., cards for different retail brands). Beyond the sheer performance of the benchmark, the ESG footprint of the overall landscape shrank again versus last year's configuration, as the MongoDB Atlas environment was the sole database and no secondary systems were required.

Temenos Transact optimized with MongoDB

The JSON advantage

Temenos made significant engineering efforts to decapsulate the data layer, which was previously stored as PIC, and make JSON-formatted data available to its user community. MongoDB was designed from its inception to be a database focused on delivering a great development experience, and JSON's ubiquity made it the obvious choice for representing data structures in MongoDB's document data model. Below you can see how Temenos Transact stores data versus Oracle or MSSQL versus MongoDB. Temenos and MongoDB have an aligned data store: Temenos Transact application code operates on documents (JSON) and MongoDB stores documents as JSON in one place, making it the perfect partnership.

Through additional nodes in the replica set, MongoDB also lets secondary applications integrate against the same database without interrupting or disturbing the transactional workload of Temenos Transact. The recurring challenge with legacy relational database management systems (RDBMS), where secondary applications suddenly have unexpected consequences for the primary application, is a problem of the past with MongoDB.

Workload isolation with MongoDB

MongoDB Atlas will, in most cases, operate in three availability zones, where two zones are located in the same region for pure availability and a single node is located in a remote region for disaster recovery.
This environment provides the often-required RPO/RTO of zero while delivering unprecedented performance. Two nodes in each of the first two availability zones provision the transactional replica set and ensure the consistency and operation of the Temenos Transact application. In each availability zone, a third, isolated workload node is co-located with the same data set as the other two nodes but is excluded from transactional processing. These isolated workload nodes provide capacity for additional functionality: in the example above, one node provides access to MongoDB Atlas Data Federation and a second node provides the interface for MongoDB Atlas Search. Because the nodes store data in near real time (replication is measured in sub-milliseconds, as they are in the same availability zone), this enables exciting new capabilities such as connecting real-time large language models (LLMs), e.g., ChatGPT, or machine learning in a Databricks lakehouse. The design is discussed in more detail in this article.

The diagram below shows a typical configuration for such a cluster setup in the European market on Microsoft Azure: one availability zone in Zurich, one availability zone in Geneva, and an additional node outside both in Ireland. Additionally, we configured isolated workloads in Zurich and Geneva. MongoDB Atlas allows the creation of such a cluster within seconds, configured to the specific requirements of the solution deployed.

Typical configuration for a cluster setup for the European market on Microsoft Azure

Should the need arise, MongoDB can have up to 50 nodes in a single replica set, so for each additional isolated workload, one or more nodes can be made available when and where needed, even at locations beyond the initial three chosen.
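As a hedged sketch of how an application might target those isolated workload nodes without touching the transactional members: Atlas tags its analytics nodes with nodeType: "ANALYTICS", and a tagged read preference routes reads to them. The database, collection, field names, and connection string below are hypothetical:

```typescript
import { MongoClient, ReadPreference } from "mongodb";

const client = new MongoClient("mongodb+srv://cluster.example.mongodb.net"); // placeholder URI

// Read preference with a tag set: reads are routed only to secondaries
// carrying the matching tag, i.e., the isolated workload nodes.
const analyticsPref = new ReadPreference("secondary", [{ nodeType: "ANALYTICS" }]);

async function balancesByBrand() {
  const accounts = client.db("bank").collection("accounts");

  // The aggregation runs on the tagged node, so reporting traffic
  // never competes with the Temenos Transact transactional workload.
  return accounts
    .aggregate([{ $group: { _id: "$brand", total: { $sum: "$balance" } } }], {
      readPreference: analyticsPref,
    })
    .toArray();
}

balancesByBrand().then(console.log).finally(() => client.close());
```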
For this benchmark, a MongoDB Atlas M600 cluster was used, which proved oversized given CPU utilization of 20-60% depending on the node type; in hindsight, a smaller MongoDB Atlas M200 would have been easily sufficient. Nonetheless, MongoDB Atlas delivered the needed database performance with one third of the resources of last year's configuration while delivering 50% more throughput. Additionally, MongoDB Atlas performed twice as fast per transaction (measured in milliseconds). Signed, sealed, and delivered.

This benchmark gives clients peace of mind that the combination of core banking with Temenos Transact and MongoDB is ready to support the needs of even the largest global banks. While thousands of banks rely on MongoDB for many parts of their operations, ranging from login management and online banking to risk and treasury management systems, Temenos' adoption of MongoDB is a milestone. It shows that there is significant value in moving from a legacy database technology to MongoDB, allowing faster innovation, eliminating technical debt along the way, and simplifying the landscape for financial institutions, their software vendors, and service providers.

PS: We know benchmarks can be deceiving, and every scenario in each organization is different. Having been in the benchmark business for a long time, you should never trust just ANY benchmark. In fact, my colleague, MongoDB distinguished engineer John Page, wrote a great blog about how to benchmark a database.

If you would like to learn more about how you can use MongoDB to move towards a composable system, architecting for real-time adaptability, scalability, and resilience, take a look at the resources below:

- Componentized core banking built upon MongoDB
- Tony Coleman, CTO at Temenos, and Boris Bialek, Global Head of Industry Solutions at MongoDB, discuss the partnership at MongoDB World 2022
- Remodel your core banking systems with MongoDB

May 9, 2023
Applied
