28.11.2019 / News Team

Last spring, Bilot and The National Audit Office of Finland (NAOF) developed the Risk Detector tool as part of the BilotGo hackathon as an aid to auditors. Risk Detector has attracted plenty of international attention during the course of the year. Many countries struggle with similar challenges in the monitoring of government spending, and Risk Detector has been presented for example to the Supreme Audit Offices of Sweden, Norway and Brazil and to representatives of the European Court of Auditors.

Risk Detector is a tool which utilizes network analysis and artificial intelligence to help design and implement audit work. The tool visualizes government procurement networks and uses artificial intelligence to highlight vendors with profiles that stand out in some way. In this way, with the help of artificial intelligence, auditors can gain new perspectives on the movement of public money.

“The tool gives us the opportunity to look at the subject from multiple angles, and that’s exactly what we were looking for,” praises NAOF project advisor Jasmin Grünbaum.

Risk Detector’s victory march is now continuing across the ocean, as Bilot and NAOF have been invited by the Comptroller General of the Republic of Peru to present the tool at the International Annual Conference for Integrity in Lima on December 3rd. Bilot’s Data Scientist Lauri Nurmela and NAOF’s Jasmin Grünbaum will discuss Risk Detector as part of a panel discussion on bribery and collusion control and detection mechanisms for public officials.

“It is clear the ideas implemented so far are just scratching the surface. The approach we have developed can be applied to so many different contexts, which I’m sure was also noticed by the Peruvians when we introduced the tool. It could even be applied to detecting and preventing criminal networks or black market activity, for example,” muses Lauri.

Case Risk Detector hackathon team after their BilotGo win.

12.11.2019 / Nhu Kangasniemi

Axios: “Promise based HTTP client for the browser and node.js”

React-router: “React Router is a collection of navigational components that compose declaratively with your application.”

 

In this article, I will utilize a powerful feature of Axios called interceptors to intercept requests and responses before they reach the next stages (then() and catch()). I assume you have basic knowledge of Axios, React and React Router before reading on, since I will abstract the fundamentals away.

 

How do interceptors work?

 

Source: Axios GitHub page

 

From the picture above, you can see that there are two types of interceptors: request and response.

  • Request interceptors take two callback functions. The first one is used to configure your request before it is sent to the server. The config object includes the option to modify your headers, and authentication is a good use case for it: you can get the accessToken from localStorage and attach it to your headers with this line of code: config.headers['Authorization'] = 'Bearer ' + token; (see the sketch right after this list).
  • Response interceptors also take two callback functions. A common use case for responses received from the server is to inject your own error handlers inside the response interceptor.
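As a minimal sketch of both interceptor types (assuming the access token is kept in localStorage under the key 'accessToken'; adapt the storage and error handling to your own app):

import axios from 'axios'

// Request interceptor: attach the access token to every outgoing request
axios.interceptors.request.use(
  config => {
    const token = localStorage.getItem('accessToken')
    if (token) {
      config.headers['Authorization'] = 'Bearer ' + token
    }
    return config
  },
  error => Promise.reject(error)
)

// Response interceptor: let successful responses pass through, handle errors centrally
axios.interceptors.response.use(
  response => response,
  error => {
    // central error handling goes here (see the example further below)
    return Promise.reject(error)
  }
)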

Let’s take a look at this example.

In the main index.js file, I have imported Router from 'react-router-dom' to create my custom history. <Router> is a common low-level interface, which suits the purpose of synchronizing the history with Redux.
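A sketch of what that index.js setup can look like (the history package is assumed to be installed alongside react-router-dom; component names are illustrative):

// index.js
import React from 'react'
import ReactDOM from 'react-dom'
import { Router } from 'react-router-dom'
import { createBrowserHistory } from 'history'
import App from './App'

// a custom history object that can also be imported outside React components
export const history = createBrowserHistory()

ReactDOM.render(
  <Router history={history}>
    <App />
  </Router>,
  document.getElementById('root')
)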

 

 

Inside the response interceptor, I want to add toast notifications to the user interface whenever I get error responses from the server, such as “Network Error” or 500. By default, the Axios error message only includes the status code, which might not be user-friendly in many cases. Another useful scenario is that you might want to use react-router to route your users to a not-found page if the server returns a 404 or 400 status. It’s a way to make errors more meaningful to you as well as to your users.
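A hedged sketch of such a response interceptor – react-toastify is used here only as an example of a toast library, and the '/notfound' route is an assumption:

import axios from 'axios'
import { toast } from 'react-toastify'
import { history } from './index' // the custom history created in index.js

axios.interceptors.response.use(
  response => response,
  error => {
    if (!error.response) {
      // e.g. "Network Error" – no response was received at all
      toast.error('Network error – please check your connection')
    } else if (error.response.status >= 500) {
      toast.error('Something went wrong on the server, please try again later')
    } else if (error.response.status === 404 || error.response.status === 400) {
      // route the user to a not-found page, outside of any React component
      history.push('/notfound')
    }
    return Promise.reject(error)
  }
)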

And that’s how you implement react-router outside a react component!

 


30.10.2019 / Esa Vanhanen-Varho

A couple of weeks ago Sulzer announced their new wireless condition monitoring system, Sulzer Sense. The service connected new hardware devices created by Treon to the existing SAP customer portal. Our team was responsible for designing and creating the backend in Azure. Naturally, when dealing with hardware under development, there were moments when we had to be very agile. But Azure as a platform proved once again to be very flexible.

The rest of this story concentrates on the part I know best – Azure. But naturally very big praise also goes to our SAP CX (e-commerce, including the recently renamed Hybris) and mobile app teams, together with our analytics experts, for making Sense (pun intended) of the raw… or slightly processed data. And thanks also to all the people at Treon and Sulzer – this was a fun journey together!

TL;DR

Below is the architecture that the rest of the blog discusses. And last week at the SAP Finug event the team received the best possible praise from the customer: “If you are looking for partner who can handle SAP ERP, Hybris and Azure, Bilot is the natural choice.”

The solution is built on Azure PaaS and serverless components, which can easily be scaled based on usage. The actual compute part (Functions) scales automatically. The journey of implementing the solution already proved the scalability. There is no maintenance overhead from virtual machines or similar infrastructure. Some scaling options currently require manual operations, but on the other hand they are related to the growth of the service and can be followed through easily monitored metrics and alerts. Naturally, this could be automated too if the system load were more random.

If you are interested in this kind of solution, we’d be glad to discuss with you here at Bilot.

Azure Architecture - Sulzer Sense

 

Developing architecture

In the beginning there were multiple choices to be made. The devices are not continuously streaming data but send samples periodically or when an alert threshold is exceeded. Naturally we had a good idea of the number and size of the messages the devices would be sending, but no actual data at hand.

Our main principle was that the data processing should be as close to real time as possible – even though this was not a direct requirement, and scheduled frequent runs would have been good enough. But then we would have had to think about how to scale the processing power up when more devices and data need to be processed. So we decided to use serverless Azure Functions for the actual processing, let the platform scale up as needed, and have one less thing to worry about.

The devices communicate with Azure through a separate gateway node that is connected to the Azure IoT Hub service. In the first phase device provisioning was manual, but that is currently being automated in the second phase. Also, as only the gateways are directly connected to Azure, the number of gateways does not grow as fast as the number of devices behind them.

The devices send different types of messages, so the next step was to separate these with the Stream Analytics service. We had decided to use Cosmos DB as temporary data storage, because it integrates well with Stream Analytics and Azure Functions and gives us good access for debugging the data. In the beginning we were unsure whether we would actually need Cosmos DB in the final solution, but it proved to be exactly what was needed…

Surprise!

When we started receiving data from the devices under development, the first thing we noticed (and had misunderstood from the specifications) was that the messages are actually fragmented – one whole message can consist of tens of message fragments. This is the reason why I, an old grumpy integration guy, insist on getting real data from a real source as early as possible – to spot those misunderstandings ASAP. But due to the nature of this development project that just wasn’t possible.

The Wirepas Mesh network the devices use has a very small packet size and the gateway just parses and passes all fragments forward. There is also no guarantee that the fragments are even received in order.

This was a bit of a surprise – but ultimately not a problem. As we had decided to stream all data to Cosmos DB for debugging, we already had a perfect platform for checking and querying the received fragments. We could also use out-of-the-box features like Time to Live (TTL) to automatically clean up the data and prevent id overruns. Some test devices sent data much more often than planned, so this gave us good insight into system loads. We also spent a couple of hours manually optimizing the Cosmos DB indexing, after which the request unit consumption (read: the price of Cosmos DB) dropped by about 80%.

Now that the data was streamed to Cosmos DB, we tied Azure Functions to trigger on each received message – which turned out to be every message fragment. What was the cost implication of this? Only minor. First, we optimized the calculation so that processing starts only if we detect that we have received the last fragment of a data packet, for those data streams where only full data is useful. So most of the processing stops there. The actual calculation and storing of the data happens only for full, valid packets. When we measured the time and memory used for the operation, we saw that 1 million message fragments would cost about 0.70 € to process – the true power of serverless! So we are basically getting compute power for free, especially as Functions offer quite a lot of free processing each month…
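As a rough sketch of that idea as a JavaScript Azure Function – the container names, field names such as packetId/fragmentIndex/fragmentCount, and the Cosmos DB trigger and queue output bindings configured in function.json are all assumptions, not the actual implementation:

const { CosmosClient } = require('@azure/cosmos')

// connection settings are assumptions – in a real solution they come from app settings
const client = new CosmosClient(process.env.COSMOS_CONNECTION_STRING)
const container = client.database('telemetry').container('fragments')

// Triggered by the Cosmos DB change feed; documents is an array of received fragments.
module.exports = async function (context, documents) {
  for (const fragment of documents) {
    // start processing only when the highest-indexed fragment of a packet has arrived
    if (fragment.fragmentIndex !== fragment.fragmentCount - 1) {
      continue
    }

    // query all stored fragments of this packet
    const { resources: fragments } = await container.items
      .query({
        query: 'SELECT * FROM c WHERE c.packetId = @id',
        parameters: [{ name: '@id', value: fragment.packetId }],
      })
      .fetchAll()

    if (fragments.length === fragment.fragmentCount) {
      // sort by index and combine into one full packet for the actual calculation
      const packet = fragments
        .sort((a, b) => a.fragmentIndex - b.fragmentIndex)
        .map(f => f.payload)
        .join('')
      context.log(`Packet ${fragment.packetId} complete (${packet.length} bytes)`)
      // ...calculation and storing to Azure SQL / Blob Storage would happen here
    } else {
      // some fragments are still missing – queue the packet for a scheduled retry
      context.bindings.retryQueue = [{ packetId: fragment.packetId }]
    }
  }
}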

Naturally we had to add scheduled retries for those rare scenarios where the packet ordering has changed. There we used Azure Storage Queues to store the events we have to recheck later. The retries are processed with scheduled Azure Functions that retrigger the messages that couldn’t be completed on the first try by updating them in Cosmos DB.

In principle, this work could also be moved to IoT Edge, as pre-processing there before sending to Azure would lower the message counts on IoT Hub a lot. In this case, the hardware development schedule didn’t make it possible to place IoT Edge on the gateway devices in the first phase.

Long term storage

Long-term storage was another thing we thought about a lot. Cosmos DB is not the cheapest option for long-term storage, so we looked elsewhere. For scalar data, Azure SQL is a natural place. It allows us to simply update data rows even if scalar data fragments arrive with delays (without having to do the same fragmentation check that the stream data uses) and also to make sure that some rolling id values are converted to be unique for very long-term storage.

During development we naturally had to load test the system. We wrote a stored procedure to simulate and create a few years’ worth of this scalar data (a few hundred million rows). Again this proved to be a great reason to use PaaS services. After we saw that our tester would run for a couple of days at our planned database scale, we could move to a much more powerful database tier, finish the test data creation in a few hours – and scale back again. So we saved two development days and finished the data creation in one night by investing about 30 euros.

Part of the data is just a sequence of numbers calculated by the devices themselves. These were not suitable for storing in a relational database. We could have used Cosmos DB, but as discussed, the price would probably have been a bit high. Instead, we decided to store this kind of data directly in Blob Storage. Then it was just a question of how to efficiently return the data for customer requests. And that could easily be solved by creating a time-based hierarchy in the blob storage to find the requested data. At least so far it works perfectly even without Azure Search, which is kept as an option for later use if needed.
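A sketch of what such a time-based naming convention could look like (the container name and path format are invented for the example):

// Build a time-based blob path so that data for a given device and time range
// can be found by listing a narrow prefix – no separate index needed.
function buildBlobPath(deviceId, timestamp) {
  const d = new Date(timestamp)
  const yyyy = d.getUTCFullYear()
  const mm = String(d.getUTCMonth() + 1).padStart(2, '0')
  const dd = String(d.getUTCDate()).padStart(2, '0')
  const hh = String(d.getUTCHours()).padStart(2, '0')
  return `measurements/${deviceId}/${yyyy}/${mm}/${dd}/${hh}/${d.getTime()}.json`
}

// e.g. measurements/device-42/2019/10/30/13/1572440400000.json
console.log(buildBlobPath('device-42', '2019-10-30T13:00:00Z'))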

Both long term storages can be easily opened up for data analysts and machine learning use cases.

API for users

So far we have discussed only what happens to the incoming data. But how is it exposed to the end users? In this case we had the luxury of already having a customer portal, so all we needed to do was create APIs for the backend to get the data in a secure way. This was accomplished with API Management in front of more Azure Functions. API Management is also used as a proxy when we need to query data from the portal’s data API, to give the serverless Functions platform a single outgoing IP address.

All the database queries, blob searches and different calculations, unit and time zone conversions are executed on the serverless platform, so scaling is not an issue. Functions may occasionally suffer from cold starts and have a small delay for those requests, but there is always the Premium plan to mitigate that – so far there has been no need for it.

We have also planned for the use of Azure Cache for Redis if we need to speed up data requests and see from usage patterns that some data could be prefetched in typical browsing sessions.

Conclusions

What did we learn? At least we once again confirmed that platform services are very well suited to this kind of work and free up effort for more important things than maintaining infrastructure. It’s just so easy to test and scale your environment, even in a partially automated way.

Also, the importance of tuning Cosmos DB was clear – that couple-of-hours tuning session was easily worth a few hundred euros per month in saved running costs.

And the architecture is never perfect. New services appear, and you have to choose your tools at one point in time, balancing between existing services and whether you dare to believe that some services will come out of preview in time. But the key is to plan everything in a way that can easily be changed later, if – and when – a more efficient service becomes available.

(PS. And really, always prioritize getting real data out of the systems you are integrating as early as possible, if you can.)


16.10.2019 / Nhu Kangasniemi

This article is a follow-up to my previous post on authentication with Azure AD in the front end.

Link to post: https://www.bilot.fi/authentication-with-azure-ad-using-msal-react-app/

 

Source: Microsoft documentation

 

The picture above shows the process of securing your web app and web APIs using Azure AD. First, you register your web APIs with Azure to make sure they don’t allow anonymous requests. The user then logs in to authenticate and requests an access token from Azure AD. Azure AD provisions an access token for the authenticated user, and your code attaches the token to the request header before the user calls any APIs.

In your React app, create a separate file for calling APIs, then import msalApp from 'auth-utils'. msalApp is an instance of UserAgentApplication, which comes with built-in methods like getAccount() and acquireTokenSilent(). getAccount() returns the account object, which contains the account info, after the user has successfully logged in. Check that the account info exists, then call acquireTokenSilent() before every API call to acquire an access token from the cache (or via a hidden iframe if it is not available there). Last but not least, attach the token to your request header.

 

import axios from 'axios'
import { msalApp, GRAPH_REQUESTS } from './auth-utils'

// Base URL of the backend API – assumed here to come from an environment variable
const BASE_URL = process.env.REACT_APP_API_BASE_URL

// Call acquireTokenSilent to get a token and build the Authorization header
async function getToken() {
  // grab the currently signed-in account
  const account = await msalApp.getAccount()
  const token =
    account && (await msalApp.acquireTokenSilent(GRAPH_REQUESTS.LOGIN))
  return {
    headers: {
      Authorization: `Bearer ${token.accessToken}`, // attach the access token
    },
  }
}

export const get = async (path, params = {}) => {
  const config = await getToken()
  return axios
    .get(`${BASE_URL}${path}`, { ...config, params })
    .then(res => res.data)
}

export const getUserInfo = () => {
  return get(`User`)
}

 

For further instructions on how to secure your APIs in Azure using the OAuth 2.0 protocol, please follow the link here:

https://docs.microsoft.com/en-us/azure/api-management/api-management-howto-protect-backend-with-aad


30.09.2019 / Toni Haapakoski

My experience from over 20 financial and operational planning projects in Finland has shown that driver-based planning created by different business functions is rarely used to its full extent. Most commonly, revenue is calculated from sales volume and sales price. Cost centre personnel costs are often calculated from headcount, with side costs added automatically. These are of course the main contributors to the P&L, but the connection to the activities performed and to the capacity required to fulfil the business targets is still missing.

Some recognized challenges with budgeting

By the target-setting process, I’m referring to the annual budgeting process in which the high-level targets are usually set first. For example, the target for revenue may be +12% and for EBIT 10% in comparison to the previous year’s actuals. After this high-level target setting (TOP), the targets are allocated to the budgets of the organization’s business units and departments (DOWN).

This does not mean that all business units and departments should have the same targets, but the aggregate total should match. One reason why budgeting often takes such a long time – even three months (Sep–Dec) – is that departments first estimate all their revenue and cost details into the P&L, and when these are rolled up to an aggregate, e.g. business unit and enterprise level, the figures don’t match the set high-level targets. It may also be that the high-level targets do not yet exist, or they change, and another too-detailed budgeting round starts again.

Another reason is that there is no direct connection from the financial targets to the business targets – the things that the enterprise, business unit or department is trying to achieve. These business targets can be e.g. penetration into new markets, increasing market share, developing and launching new products and services, growing brands, or strengthening partnerships. Businesses should work out what activities and resources are needed in order to achieve the business targets.

Let’s take the above example, where the new revenue target was set to +12% compared with the previous year’s actuals. In order to improve sales, certain activities – e.g. more sales visits – have to be carried out.

With the current customer lead conversion rates, it can be estimated that the number of sales visits has to be increased by 50%. With the current capacity, i.e. the available salespersons, this cannot be achieved. More salespersons need to be hired, and eventually their salary costs will land in the P&L personnel costs.

This is a closed-loop scenario, where business drivers (# sales visits, lead conversion rate %, # FTEs) determine how the targets can be achieved, how much capacity it will require, and how much it will cost.
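As a toy illustration of such a closed loop – every figure below is invented for the example, not taken from any real case – the drivers can be chained into a simple calculation:

// From a revenue target to required sales activity, capacity and personnel cost.
const lastYearRevenue = 10000000          // €
const revenueTarget = lastYearRevenue * 1.12

const avgDealSize = 50000                 // € per won deal
const leadConversionRate = 0.25           // share of sales visits that turn into deals
const visitsPerSalespersonPerYear = 150

const dealsNeeded = revenueTarget / avgDealSize
const visitsNeeded = dealsNeeded / leadConversionRate
const salespeopleNeeded = Math.ceil(visitsNeeded / visitsPerSalespersonPerYear)

const avgSalaryWithSideCosts = 75000      // € per FTE per year
const personnelCost = salespeopleNeeded * avgSalaryWithSideCosts

console.log({ revenueTarget, visitsNeeded, salespeopleNeeded, personnelCost })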

Another example, from another angle: if the financial target is to cut e.g. ICT costs by 20%, or by 2,000,000 €, this in turn sets the target amount for the ICT resources from which the costs arise: headcount (# FTE), personal computers (# PC), office space (# m2) and so on.

Reducing the ICT capacity will ultimately impact ICT service levels: for example, how many incidents can be solved? Is onsite support still available if there is less office space? This kind of closed-loop planning process provides a better understanding than pure financial budgeting of what has to be done in order to achieve something and what the impact on business processes is.

With forecasting, the planning process and objective is different

While in budgeting the focus is on top-down target setting for one fiscal year, forecasting aims to predict bottom-up whether the targets can be achieved, and also what the possible future outcomes are (e.g. best and worst scenarios) within the next 12–16 months or even over the longer term.

The latter is called scenario planning, and it can fit a purely financial planning context, but it still requires input from the business units about their business environment, the external drivers impacting it, and the possible impact on the internal drivers and financials of the business operations. Examples of external events that a company has no control over but which impact business performance include legislation, taxation, currency exchange and interest rates, the loan market, competitors, economic growth, inflation and political crises.

The main purpose of scenario planning is to be able to understand the magnitude and direction of changes in the market and to react faster. The closed-loop model, where both internal and external key performance drivers have been linked to the financials, provides a quick and easy way to get a holistic view of the enterprise from both a business and a financial perspective.

holistic enterprise view

 

Benefits of having driver-based planning in addition to financial planning

  • Easier to understand, explain and plan – more accurate forecasts – better quality and better informed decisions
  • Faster to update when changes in the business environment occur – helps to understand and react to external events
  • Increases the awareness about what drives the revenues and costs – Key Drivers of the Business
  • Enables Scenario and What-If planning
  • Facilitates more collaboration between the functions and organizations and increases the responsibility for and ownership of results

 


25.09.2019 / Nhu Kangasniemi

While working on a project using React with create-react-app in the frontend, .NET Core 2.2 in the backend and Azure AD for authentication, I found it extremely hard to find a ready-made solution for how to make it work properly in a React app. I know there are lots of articles about using ADAL, but the trend is moving towards MSAL.

For the sake of clarity, this article will focus heavily on implementation of MSAL (Microsoft-Authentication-Library-For-JS) to facilitate authentication of users and get access token from Azure AD.

(Want to know what the differences are and why you should choose MSAL over ADAL? Read more here: https://docs.microsoft.com/en-us/azure/active-directory/develop/msal-compare-msal-js-and-adal-js)

 

1. Installing MSAL using npm

I assume you have already registered your application in Azure AD. In order to use MSAL, install it first by running the command below:

npm install msal

 

2. Implementation

Next up, in the src folder, create a file named auth-utils.js. Inside this file, create an instance of UserAgentApplication and configure it with your clientId from Azure AD and the correct scopes, depending on which environment you’re running in, using an if/else statement. You’re free to choose between sessionStorage and localStorage when it comes to caching.
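A minimal sketch of what auth-utils.js can contain – the clientId, authority and scopes below are placeholders, and the environment-specific if/else is omitted for brevity:

// auth-utils.js
import { UserAgentApplication } from 'msal'

// scopes requested at login and when acquiring tokens silently
export const GRAPH_REQUESTS = {
  LOGIN: { scopes: ['openid', 'profile', 'user.read'] },
}

export const msalApp = new UserAgentApplication({
  auth: {
    clientId: '<your-application-client-id>', // from the Azure AD app registration
    authority: 'https://login.microsoftonline.com/<your-tenant-id>',
    redirectUri: window.location.origin,
  },
  cache: {
    cacheLocation: 'sessionStorage', // or 'localStorage'
    storeAuthStateInCookie: false,
  },
})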

 

The main idea is to create a HOC (higher-order component) to wrap around your App component. This AuthProvider HOC contains all the logic for login, logout and acquiring a token silently without prompting the user (using hidden frames). In this example, only redirect login is applied, but you can modify it to use popup login if needed. In the componentDidMount lifecycle method, call the onSignIn() method to redirect the user to the login page.
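A simplified sketch of such an AuthProvider HOC (the component and prop names follow this article; error handling is left out):

// AuthProvider.js
import React from 'react'
import { msalApp, GRAPH_REQUESTS } from './auth-utils'

const AuthProvider = WrappedComponent =>
  class extends React.Component {
    state = { account: null }

    onSignIn = () => msalApp.loginRedirect(GRAPH_REQUESTS.LOGIN)

    onSignOut = () => msalApp.logout()

    componentDidMount() {
      const account = msalApp.getAccount()
      if (account) {
        // the user is already signed in – keep the account info in state
        this.setState({ account })
      } else {
        // redirect the user to the Azure AD login page
        this.onSignIn()
      }
    }

    render() {
      return (
        <WrappedComponent
          {...this.props}
          accountInfo={this.state.account}
          onSignOut={this.onSignOut}
        />
      )
    }
  }

export default AuthProvider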

 

Import AuthProvider and wrap it around your App. Now the App component has access to the props passed down to it by the AuthProvider HOC. When users first enter the page, they have to log in in order to use your app. A successful login returns accountInfo as a prop to your App. You can then use it to decide whether to render the logout button, and to call the onSignOut() function once the user clicks the Sign out button.
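In code, the wrapping can look roughly like this (the file layout is an assumption):

// index.js
import React from 'react'
import ReactDOM from 'react-dom'
import AuthProvider from './AuthProvider'
import App from './App'

// App now receives accountInfo and onSignOut as props from the HOC
const AppWithAuth = AuthProvider(App)

ReactDOM.render(<AppWithAuth />, document.getElementById('root'))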

 

It’s pretty much a wrap for implementing authentication with Azure AD in the front end. Next up, I will show you how to call your APIs with an Authorization header using the access token, and how to get a new token silently (with no page reload) when it expires. See you soon!


19.09.2019 / Nhu Kangasniemi

When it comes to the internationalization of your project, the hassle of choosing the right tool to start with can cause you a headache. This article is just right for you if you’re looking for a solution that is powerful yet simple to implement, so you can get your project’s translations up and running quickly. React-i18next is a localization framework specialized for React and React Native.

 

1. Where should I start?

 

First things first: make sure you’re in the right folder and install the i18next packages by running the command below in your terminal. After that, open your package.json file and check in the dependencies that you have both packages. I include my versions here, “i18next”: 15.0.7 and “react-i18next”: 10.5.2, just to make sure you have the same working versions.
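Assuming you use npm, the command is:

npm install i18next react-i18next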

 

2. Configuration File

Create a file named i18n.js in your src folder and use the following code.
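A minimal configuration sketch is shown below – the resource file paths, namespaces and the lowercase formatter follow the example described in this article, but the exact options are an assumption:

// i18n.js
import i18n from 'i18next'
import { initReactI18next } from 'react-i18next'
// import LanguageDetector from 'i18next-browser-languagedetector' // intentionally not used

import en from './locales/en.json'
import fi from './locales/fi.json'

i18n
  // .use(LanguageDetector) // the language is set from the user API instead
  .use(initReactI18next)
  .init({
    resources: { en, fi },
    lng: 'en',            // default language, overridden once the user info is loaded
    fallbackLng: 'en',
    ns: ['common', 'modal'],
    defaultNS: 'common',
    interpolation: {
      escapeValue: false, // React already escapes values
      format: (value, format) =>
        format === 'lowercase' ? String(value).toLowerCase() : value,
    },
  })

export default i18n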

 

i18next provides users with great features, such as a browser language detector option. However, in this context, I prefer to set the language for the user based on the user info I receive back from calling the user API, so I have commented the LanguageDetector out.

 

3. Translations keys

• Create a folder called locales and create a json file for each language you need in the project.

• Inside the JSON file, you create different namespaces and key-value pairs for each translation key. In the example, I have two namespaces, “common” and “modal”. Under newModal, I have the header key whose value contains a variable called “item” with the lowercase format (set up in the configuration file); a sample file follows this list.

 

• When you add a JSON file for a new language, make sure it has the same format with the same namespaces and keys; the only difference is in the values, which are translated for each language.
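To make the structure concrete, here is an assumed en.json following the namespaces and keys mentioned above (the actual keys and values in your project will differ):

{
  "common": {
    "save": "Save",
    "cancel": "Cancel"
  },
  "modal": {
    "newModal": {
      "header": "Create a new {{item, lowercase}}"
    }
  }
}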

 

4. Usage

Now that you have all the boilerplate, let’s get our work done. There are a few ways you can use i18next, and below I demonstrate the use of a HOC. First, import the withTranslation HOC from react-i18next, then load a single namespace or multiple namespaces as parameters of withTranslation and use it to wrap your component. From there, the Header component gets access to the t function and the i18n instance via props. Use the t function to translate the content by specifying the namespace and the key.
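A short sketch of the Header component described above (the namespaces and the key follow the earlier example; the rendered markup is illustrative):

// Header.js
import React from 'react'
import { withTranslation } from 'react-i18next'

const Header = ({ t }) => (
  // t() resolves the key from the loaded namespaces; "modal:" selects the namespace
  <h1>{t('modal:newModal.header', { item: 'Report' })}</h1>
)

// load the "common" and "modal" namespaces and inject t and i18n as props
export default withTranslation(['common', 'modal'])(Header)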


As mentioned, it’s simple and quick. I will link the react-i18next documentation here for your further use cases.

https://react.i18next.com


19.09.2019 / Pekka Tiusanen

What Do I Need and How Will It Fit?

What AI/ML tools should be adopted in my organization, and how should advanced analytics be integrated into the data architecture? Implementations are driven by business cases with varying technological requirements. There are plenty of options in the market, and different SaaS products have characteristic strengths and weaknesses. The existing architecture is a significant factor in the decision making as well.

Limited Use of AI/ML within Reporting Tools

Although programming language support and AI/ML capabilities exist in reporting tools, they come with certain limitations and hindrances. For example, writing R scripts in Tableau requires adopting a product-specific workflow and programming logic.

Reporting software can still be utilized to produce small-scale solutions. One of the common use cases for advanced analytics in reporting is key figure forecasting. URL-based integration also allows AI/ML applications to be embedded in reporting. For example, interactive Shiny (R) app dashboards can be included in Tableau reports. However, these are minimal implementations.

Shortcomings of Graphical AI/ML Tools

Graphical AI/ML utilities, such as Azure ML Studio and RapidMiner, are a step up from reporting tools, but they still lack the flexibility that is necessary to fulfil large-scale production requirements. Despite the support for R and Python, this is not the standard way to use graphical AI/ML tools, which is reflected in the associated usability.

When it comes to training workloads, adding a powerful computation engine on top of other features has not been sufficient for RapidMiner to remain relevant. This is partially because the industry is being taken over by end-to-end design and seamlessly concatenated cloud products from source to consumption.

Finally, mere REST API model deployment without scalability is often not good enough for real-time implementations. On the other hand, IaaS-based solutions for scaling are too tricky for many organizations to maintain. Such solutions also require extra DevOps programming work compared to standardized cloud products built for the purpose.

Microsoft Azure Cloud Platform for Scalability, Power and End-To-End Features

Cloud-based programming environments have invaded the AI/ML scene. These products provide calculation power, scaling features and end-to-end readiness. Model training may necessitate a true computation cannon to be swift enough. Furthermore, it is sometimes required for models to be consumed by X thousands of users with a minimal response time. Reasons to prefer such SaaS or MLaaS (machine learning as a service) solutions over custom applications include cloud platform compatibility, ease of maintenance and standardization.

AI tools

Note: Model training in Spark is available for Azure ML Service. However, Databricks is a more comprehensive tool for AI/ML development work.

Azure Databricks – Where Data Science Meets Data Engineering

Demand for large scale training computation loads can be met by employing a Spark-driven tool called Azure Databricks. It supports role-based access control and allows data scientists, data engineers and other people involved to collaborate in advanced analytics projects. Developers can write R, Scala, Python and SQL in Databricks notebooks. The resulting AI/ML modelling pipelines can be scheduled by using Azure Data Factory. Version control is typically managed through Azure DevOps.

Note: Databricks is available in AWS too. 

Real-Time Scenario

Scalability requirements for a large user base and stable real-time performance can be addressed by adding Azure ML Service to the chain of sequentially employed cloud products. A typical way to do this would be deploying the solution to Azure Container Service. Kubernetes clusters are often employed in this scenario, but other deployment targets are supported too. Additional custom features can be built inside the web service.

Batch Scenario

If real-time responses are not required by the AI/ML business case, Azure building blocks can be used to construct a batch forecasting pipeline. This is a common scenario where Azure Databricks trains the AI/ML model and writes a batch of forecasts to a pre-defined table. Once again, the Databricks workloads can be scheduled with Data Factory. The forecast table is consumed by a reporting tool, such as Microsoft Power BI.

Concluding Remarks

Although AI/ML development is business case driven, cloud environment for POCs and production solutions is also a strategic asset. Modern cloud-hosted solutions provide means to build production ready advanced analytics pipelines. They can also be extended with additional features stemming from business needs and technological landscape. Some of the early AI/ML adopters may have to shift from custom IaaS solutions towards end-to-end cloud platforms to ensure architecture viability in the long-term.


3.07.2019 / Guest Writer

Today, companies do not only want to measure how things are going in their business, but also how customers and employees experience their interactions with the company. Experience Management is a topic that is getting more and more attention. It is crucial for any business that wants to meet its customers with better experiences – and continuously develop better products and stronger brands.

To get to that level of experience management, companies need to go beyond the business information that is traditionally handled in the business systems, i.e. operational data (O-data).

Platforms for collecting valuable experience data (X-data) from all types of contact points across customer journeys are needed, and capabilities to measure, analyse and close the customer interaction loops are crucial in order to manage experiences efficiently.

So, what’s new with O-data and X-data?

It is the ability to combine these different types of data and use them to help companies understand how they must leverage new insights to remain competitive. With operational data, businesses can find out what is happening in the business, and with experience data, why it happens.

But today there is still a clear experience gap in many companies, meaning that companies’ internal views of their ability to deliver world-class customer experiences often differ a lot from the perception of their customers (= the experience gap).

Furthermore, large amounts of effort are currently spent on analyzing how the company is operating, whereas the majority of this effort should actually be focused on understanding and analyzing how the company is perceived.

Experience Management and the experience gap is therefore a very interesting topic to talk about! But how does this topic fit into the world of SAP and how can we support companies on this journey?

On the 4th of September we will be in Stockholm participating in the Future Customer Experience, Sales & Marketing event together with our partner Bilot, so make sure you participate and come meet us there!

Stefan Fröberg, Sales Manager, SAP Customer Experience