25.04.2017 / Ville Niemijärvi

I sat down again with Lasse, Louhia's lead Data Scientist, and we chatted about which tools are needed in advanced analytics implementation work.

In the previous post, Lasse and I went through the three most important skills of a Data Scientist. Those skills already hint at the kind of tool expertise one also needs to master.

You can watch the video here or at the end of this blog post.

The Data Scientist's most important tools

Lasse, who has delivered dozens of data science implementations at Louhia over the past five-plus years, lists a sizeable set of products he uses daily in his work:

  • ETL work, data integration and transformation: SQL Server Integration Services
  • Databases: SQL Server Management Studio
  • Modeling, data science: R, Python, RapidMiner, Azure ML
  • Data visualization, reporting: QlikView, Power BI

Each product has its own role, and to give a better picture of where they are used, we placed them into a typical business intelligence/analytics architecture in the figure below.

A typical analytics architecture and the data scientist's tools

A typical data warehouse/business intelligence/analytics architecture

The figure above shows a typical architecture that is either already in place at the company, or one that we build as part of an analytics project or a broader data-driven management initiative.

At the bottom are the data sources: the organization's numerous operational systems, CRMs, financial administration and so on. Increasingly, there are also external data sources, either accessed through open APIs or purchased from third parties.

The data then needs to be loaded onto some storage platform for analytics. During the load, data is typically combined, transformed and cleaned. In data warehousing circles this is called the ETL process (extract-transform-load), but often people simply talk about data integration.

Here Lasse mainly relies on SQL Server Integration Services (SSIS). Besides SSIS, I have used Pentaho and IBM Cognos Data Manager, which has since been superseded by IBM InfoSphere DataStage.

Other products on the market include Informatica, Oracle Warehouse Builder and SAS Data Integration Studio.
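
To make the ETL step concrete, below is a minimal sketch of a single extract-transform-load run, written in Python with pandas instead of a dedicated ETL tool such as SSIS; the file names, column names and connection string are invented for illustration.

```python
# A minimal extract-transform-load sketch in Python/pandas.
# File names, column names and the connection string are illustrative only.
import pandas as pd
from sqlalchemy import create_engine

# Extract: read raw data exported from source systems
orders = pd.read_csv("orders_export.csv", sep=";")
customers = pd.read_csv("crm_customers.csv", sep=";")

# Transform: clean, join and derive the fields the analytics layer needs
orders["order_date"] = pd.to_datetime(orders["order_date"], errors="coerce")
orders = orders.dropna(subset=["customer_id", "order_date"])
enriched = orders.merge(customers, on="customer_id", how="left")
enriched["order_month"] = enriched["order_date"].dt.to_period("M").astype(str)

# Load: write the result into the analytics database (here SQL Server via ODBC)
engine = create_engine("mssql+pyodbc://user:pass@analytics_dsn")
enriched.to_sql("fact_orders", engine, if_exists="replace", index=False)
```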

So the data is stored on some storage platform, and most often that is still a relational database. In Lasse's case it is usually SQL Server, managed with SQL Server Management Studio.

Big data platforms (e.g. Hadoop) and NoSQL databases have become more common and our customers do use them, but in the end it is easiest to bring the data into a relational database so it can be used for statistical modeling, i.e. the actual data science work.

That is when the modeling tools come in: R, Python, RapidMiner or Azure Machine Learning.

Other products on the market include SAS, KNIME, SPSS, Amazon Machine Learning and IBM Watson.
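
And to show what the modeling step can look like in code, here is a hedged sketch of fitting a simple classifier in Python with scikit-learn on a feature table prepared in the database; the table and column names are again purely illustrative.

```python
# Fit a simple propensity-style classifier on a prepared feature table.
# Table, column and connection names are illustrative, not from a real project.
import pandas as pd
from sqlalchemy import create_engine
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

engine = create_engine("mssql+pyodbc://user:pass@analytics_dsn")
data = pd.read_sql("SELECT * FROM fact_orders_features", engine)

X = data.drop(columns=["customer_id", "churned"])
y = data["churned"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)

model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)
print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```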

Once the predictive models or other advanced analytics work is done, the results are usually delivered to the business as visualizations (unless they feed into an operational process or application).

This is where the reporting, visualization and business intelligence products come in. Lasse favors QlikView and Power BI.

Other BI products common among our customers include Tableau, Cognos, SAP and Oracle.

A Data Scientist needs to master a wide range of products

As the list above shows, at least in consulting firms a Data Scientist needs to master a large number of different products.

Of course, large projects often have dedicated specialists for building the data warehouse or data lake, and others for data visualization.

But the process becomes painfully slow if the data scientist has to call in an ETL specialist every time they need a new data set or changes to an existing one.

The work is so iterative that it is most efficient for the data scientist role to also cover ETL work, databases and data visualization.

Watch the full video below. Remember to follow Louhia's YouTube channel and leave a comment if you want to hear and see more analytics content from us.

 


20.04.2017 / News Team

Bilot - Happy May Day!

Have you been diagnosed yet? Are you perhaps suffering from digital deuteranopia, a customer experience coma, oppo-osteoporosis, or just a small technical hiccup?
For these and many other ailments we offered remedies from the pop-up & steampunk bar we built together with Altia at this spring's SAP Innovation Forum.
Could May Day week be a good time for a little diagnosis? Consider your own or your organization's level of digital maturity and find the link to your prescription in the list below.


Digital diagnosis: Toddler

Strong performance, but not quite digital yet. What is the worst that could happen if you let the cloud serve you? Go on, open up those APIs!
Prescription: Jäämeri

Digital diagnosis: Explorer

You have taken your first digital steps, congratulations! But are you still searching for a new continent? Might it be time to turn your gaze to the clouds?
Prescription: Metsän kutsu

Digital diagnosis: Challenger

The white glove of your digital game is already challenging your competitors effectively. Yet do you sometimes catch yourself gazing dreamily at the digital moon in the evening sky? Might it be time to aim for the stars?
Prescription: Vaahtopää

Digital diagnosis: Invader

You are a conqueror, a true digital Viking in the land of the analog. The next level is only a few cloud services away.
Prescription: Poiju


Digital diagnosis: Digital Star

Digitalist, you are a star! Only Bilot can take you towards new solar systems, towards ever more luminous triumphs.
Prescription: Punainen kuu


Happy May Day!

Read also:

Steam & defrosting at SAP Innovation Forum 2017
Kielipoliisin hekumeeninen pilveennousemus
CDO-barometri: Rohkeutta, ystävät!
Have you been diagnosed?


20.04.2017 / Mika Tanner

Curiosity drove us to launch the first ever Chief Digital Officer (CDO) Barometer in November 2016, with Finland's sparse but rapidly growing population of CDOs as our target group. We managed to reach the vast majority of these CDOs, who were equally eager to hear what their peers had to say.

Our core business is improving our customers’ competitive advantage in the areas of digital customer acquisition, advanced customer engagement, e-commerce, digital services and sophisticated analytics. These themes have increasingly become boardroom topics, and during the past couple of years digitalization has become a designated management discipline with the CDO at the helm.

What is it that keeps the CDO up at night?

Our angle to the inquiry was, on one hand, to get a better understanding of the CDO’s role as opposed to the CIO’s, and on the other hand, to find out what keeps the CDO up at night – the burning issues they feel they need to resolve. The response rate was excellent and the findings were interesting indeed. Finnish CDOs regard themselves as change agents, not IT managers; their mission is to navigate their companies through extensive digital transformation. Bilot’s mission is largely identical and we felt it our obligation to align ourselves with the CDOs’ agenda. Without doubt, we are technically very savvy, but without a strong business context we don’t feel our value proposition is fully exploited.

What is the role of CDO in Poland?

As Bilot operates in an international business environment and has launched a business entity in Poland as well, extending the CDO Barometer to Poland came as a natural next step. One of the reasons we entered the Polish market in the first place was that it is a growing market, roughly five times the size of Finland. On the other hand, the degree of digital maturity there is still an unknown to us, as is whether the role of the CDO has begun to emerge as frantically as in the Finnish market.

Once we have carried out the CDO Barometer in Poland and analyzed the data, we will arrange a results event similar to the one we held in Helsinki. We intend to get the CDOs together, and we aspire to provide a dedicated discussion platform for peer CDOs to exchange thoughts, be inspired and learn together.

Are you one of the first Polish CDOs? Please enter your thoughts into the Barometer.

Posts about the first CDO Barometer:


20.04.2017 / Mathias Hjelt

Long gone are the days when companies were likely to buy the entire stack of enterprise tools from their ERP software vendor, Mika Tanner pointed out in his recent blog post.

There has clearly been a switch from ERP-driven to best-of-breed software shopping and development. Adding to the equation, software buying power has shifted towards business departments, and methodologies have shifted towards rapid and agile. The compound effect is that quick proof-of-concepts based on a broad range of technologies may spawn like mushrooms after rain. Great success or architectural madness follows.

Many companies are still struggling with getting it right. In this post, I will highlight 5 topics to consider and pitfalls to avoid.

1. Be precise about division of responsibility and ownership of data

When a process gets sprinkled across multiple systems, each with different degrees of data sophistication, it can be difficult to choose which system should be responsible for executing certain tasks or leading a certain part of a process. Likewise, it can be difficult to decide which system should be the master of each data entity, and to which extent data should be replicated back and forth between systems.

Failure to address these topics with determination and clarity will cause problems in the short term (not getting the most out of your best-of-breed stack) or the long term (e.g. data quality deteriorating instead of improving over time). Even if you don’t see the problems immediately, things may get out of control when you try to develop the next solution on top of the jungle. (Putting an API on top of the jungle will not make the problem go away – see below.)

My advice:

Be precise about the division of responsibility between systems. Be equally precise about ownership of data. Sounds simple, but many enterprises have serious challenges with this!
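
One lightweight way to make that precision stick is to write the ownership decisions down in a machine-readable form that integration code can enforce. The sketch below is only an illustration in Python; the system and entity names are invented and not a recommendation of any particular tool.

```python
# A hedged sketch of making data ownership explicit and enforceable.
# System and entity names are invented for illustration.
from dataclasses import dataclass

@dataclass(frozen=True)
class Ownership:
    master: str            # the single system allowed to create/update the entity
    replicated_to: tuple   # systems that only receive read-only copies

DATA_OWNERSHIP = {
    "customer":    Ownership(master="CRM", replicated_to=("ERP", "ecommerce")),
    "product":     Ownership(master="PIM", replicated_to=("ERP", "ecommerce")),
    "price":       Ownership(master="ERP", replicated_to=("ecommerce",)),
    "sales_order": Ownership(master="ERP", replicated_to=("CRM",)),
}

def assert_may_write(system: str, entity: str) -> None:
    """Fail fast if a non-master system tries to update an entity."""
    if DATA_OWNERSHIP[entity].master != system:
        raise PermissionError(f"{system} is not the master of {entity}")
```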

2. A simple API does not dissolve complexity

The startup world thrives on lean cloud apps interconnected over APIs. API talk also thrives among enterprise architects and salesmen. Best-of-breed buying is fueled by the assumption that anything and everything is easy to connect and extend, if there is an API.

Don’t overestimate the magic of APIs. APIs are terrific, but by no means a silver bullet for easily manageable architecture and walk-in-the-park projects. Publishing a complex process through a simple API does not make the complexity go away. It momentarily hides it from the people developing on top of the API. But in the bigger picture, there is still a full, hairy, complex stack of systems to govern and respect when you make changes to your processes.

My advice:

Build enterprise APIs, build enterprise software on top of APIs, but don’t forget that you are developing and maintaining full-stack solutions, which require full-stack governance and understanding. Choose partners and vendors who are up to the job.
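
To illustrate the point that a simple API only hides the stack behind it, here is a hedged sketch of an "order status" endpoint; FastAPI and the backend stubs are illustrative choices of mine, not part of any real implementation.

```python
# A hedged sketch of a deceptively simple "order status" API whose single
# endpoint still orchestrates several backend systems underneath.
# The backend stubs and FastAPI as framework are illustrative choices only.
from fastapi import FastAPI

class ErpStub:
    def get_delivery_status(self, order_id: str) -> str:
        return "partially delivered"        # in reality: an ERP call with its own quirks

class BillingStub:
    def get_invoice_state(self, order_id: str) -> str:
        return "invoiced"                   # in reality: the invoicing system

class CarrierStub:
    def get_tracking_events(self, order_id: str) -> list:
        return ["picked up", "in transit"]  # in reality: the logistics provider

app = FastAPI()
erp, billing, carrier = ErpStub(), BillingStub(), CarrierStub()

@app.get("/orders/{order_id}/status")
def order_status(order_id: str) -> dict:
    # One simple endpoint for the consumer, a full stack of systems behind it,
    # all of which still need governance when processes change.
    return {
        "order": order_id,
        "delivery": erp.get_delivery_status(order_id),
        "invoice": billing.get_invoice_state(order_id),
        "tracking": carrier.get_tracking_events(order_id),
    }
```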

3. The platform is the new monolith

Relying on the ERP as the center of everything is out of fashion, because it’s too monolithic. The same goes for building enterprise software too tightly bound to the ERP. There is plenty of buzz about how companies should build a digital platform as a foundation for rapid innovation. Buying software, add-ons or custom development that runs on the platform keeps your architecture coherent while helping you stay away from the evil ERP monolith.

But the risk is that your platform – be it a bare-bones cloud PaaS, a rich enterprise ecosystem in the cloud, a low-code environment or a thin API layer – becomes the new monolith. If the platform sinks, down goes everything built on top of it. Many companies are currently facing massive re-implementation exercises as their “once best-of-breed but now sadly outdated” platforms erode, pushing massive chunks of enterprise software into sunset land all at once. Are the hot platforms of today less prone to future erosion? I don’t think so.

My advice:

Build on a platform when it makes sense, but always ask yourself: what happens the day when we want to or have to get rid of the platform? Can we re-deploy or do we need to re-write / re-purchase? Build independent, loosely coupled systems or services when your platform-monolith gut feeling warning bells go off.

4. Know your license terms

Software vendors want to make money. They want a huge share of your wallet. No surprise there. What may come as a surprise to some is that, for example, an ERP system may come with licensing terms which are quite restrictive when it comes to integrating with 3rd party systems. Recent news about a company having to shell out considerable amounts of money due to “indirect use” has brought more attention to this topic. Rightly so.

My advice:

Ask very precise and frank questions about licensing. Both from your 3rd party best-of-breed vendor, and from your ERP vendor.

5. Be bold enough to pull the plug

Lastly — quick Proof-of-Concepts are the stuff that agile, business-driven IT development is made of. Being able to try out new stuff without tedious planning helps you innovate faster.

But beware: PoCs which are set up in an ungoverned results-over-planning fashion, and never make it to a controlled transition to production at scale, also tend to become the stuff of enterprise architecture nightmares. Unless they are terminated at some point.

My advice:

Be bold enough to pull the plug on systems when appropriate. Get rid of solutions that never were designed for long-term, company-wide, future-proof usage. Alternatively, ensure that there is a solid path forward for the PoCs that you want to keep.


12.04.2017 / Karri Linnoinen

Every year Hortonworks, together with Yahoo, puts on the DataWorks / Hadoop Summit, a 2-3 day conference dedicated to Big Data and its technologies. This year it was my turn to visit the summit, so I've compiled a quick summary.

#DWS17

DWS17 kicked off with an epic (to say the least) laser show.


From the welcome keynote on Day 1, the emphasis was on the data itself. It's not about Hadoop or the platform anymore, but about how to create value from the data in your organisation or the data you have collected. That data is also the reason the summit has been renamed from the "Hadoop Summit" to the "DataWorks Summit". With the ability to process and use data in an entirely different way from times of old, new businesses will emerge from data.

“In today’s world, data is actually our product.”

Scott Gnau, the Chief Technology Officer at Hortonworks, talked about how the Future of the Enterprise is centered around four paradigms: Cloud Computing, Artificial Intelligence, the Internet of Things and Streaming Data. Many of these are already in use in organisations, Cloud computing especially. Artificial Intelligence, which in itself is a broader area, is getting a lot of traction as Machine Learning becomes more accessible due to services like Microsoft Azure Machine Learning Studio.

As for the other keynotes on both mornings, the sponsor keynotes were a little hit-and-miss.

Delightfully, the last morning keynote on Day 1, by Dr. Barry Devlin, shook things up by outlining the fall of Capitalism and how A.I. will inevitably replace the factory worker — if, of course, we continue on our present course. It was a very interesting take on the future of Big Data and life beyond it, considering the speed at which current and new technologies are developing. As technological progress increases at an exponential rate, a crash is almost inevitable. A somewhat morbid start to the summit, you could say, but thankfully the presentation had a silver lining at the end: we are now at the turning point, where we can still influence how the future turns out and how steep the downward curve becomes. Hopefully we are able to level it out and avoid Dr. Devlin's Skynet-esque future 🙂

The last keynote on Day 2, by Dr. Rand Hindi, was a quick look into privacy issues in Cloud computing. With the introduction of personal voice assistants like Amazon Alexa and Google Home, technology companies should be giving more and more thought to where consumers' data is processed. Voice patterns are, after all, just as unique as fingerprints.

Breakout Sessions

This year, as the focus was on the data itself, you could see that many of the Breakout Sessions were implementation showcases by different companies. BMW, Société Générale, Lloyds Bank and Klarna all showed how they'd leveraged Hadoop on their Big Data journey. Data Science also played a big role at DWS17, as many of the customer showcases and Breakout Sessions had a Data Science theme.

Live Long And Process

Looking at the agenda for the two days at DWS17, one thing jumped out: Hive, specifically Hive with LLAP. This was evident in the number of Hive (and LLAP) specific Breakout Sessions. Apache Hive has been part of the HDP stack forever, and has been a staple of many of our PoC architectures at Bilot. Back in 2016, the launch of the Hive 2.0 LLAP Tech Preview made a lot of people happy, as the query speeds of Hive 1.x lacked the required punch and full ACID support was missing. Now, with the newest version of the platform, LLAP is generally available, and the many sessions at DWS17 indicated it's a big deal. Query times are reduced by an order of magnitude, which is definitely something to be excited about.
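
For readers who haven't used Hive from code before, here is a hedged sketch of an interactive query from Python via PyHive; the host, schema, table and column names are invented, and LLAP itself is enabled through cluster-side configuration rather than anything in this client snippet.

```python
# Hedged sketch: an interactive Hive query from Python via PyHive.
# Host, database, table and column names are illustrative only; LLAP is
# enabled on the cluster side (HDP 2.6 / Hive 2.x), not in this client code.
from pyhive import hive

conn = hive.Connection(host="hdp-edge.example.com", port=10000, username="analyst")
cursor = conn.cursor()
cursor.execute(
    """
    SELECT event_date, COUNT(*) AS events
    FROM clickstream.events
    WHERE event_date >= '2017-01-01'
    GROUP BY event_date
    """
)
for event_date, events in cursor.fetchall():
    print(event_date, events)
```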


LLAP also adds value to other newer technologies coming into the HDP stack. Druid, a new time-series optimised data store, can leverage LLAP’s parallel processing capabilities to speed up query times. I’m especially excited to test out Druid, as it will come bundled with HDP 2.6 and thus be deployable via Ambari blueprints. It’s currently in beta, but will hopefully mature quickly.

HDF

Hortonworks DataFlow, powered by Apache NiFi, looks to be Hortonworks' next big thing. Teradata, for example, has open-sourced its new "data lake management software platform", Kylo, which leverages NiFi for pipeline orchestration. Hortonworks DataFlow still requires a fair amount of infrastructure to run, but as its little brother MiNiFi (a lightweight, JVM-based version of NiFi) matures, I think the whole edge-node processing paradigm will take off in a completely different way, especially once you can run NiFi on very resource-scarce systems.

But we’ll have to stay tuned.

HDP 2.6 and beyond

Funnily enough, the launch of the new major release of HDP and Ambari wasn’t hyped at DWS17 as much as I would have expected. Granted, there was a fair amount of buzz around its new features, but the focus definitely was elsewhere. That being said, it didn’t mean that the announcement wasn’t important. Many of the new, cool features are only available with HDP 2.6 and Ambari 2.5, so users will need to upgrade their existing systems to leverage LLAP and Druid, for example. I for one will definitely be doing some upgrading 🙂

Beyond the newest version of HDP lies Hadoop 3.0. It could be released as early as Q4/2017, and will bring improvements to resource management as well as container support (yay!). This will make Hadoop itself more resource-aware, and mean better performance. The usage of Docker has exploded since its initial release four years ago, and some of the newer Hortonworks apps, such as Cloudbreak, already take advantage of the technology. So with the addition of container support to Hadoop, YARN could potentially control non-Hadoop services and applications deployed into containers.

In Summary

The DataWorks Summit is definitely something you need in your life if Big Data is on your roadmap or you're already knee-deep in it. I'm glad I went, since getting to talk to the developers and community members directly is invaluable.

Stay tuned for some blog posts on specific technologies related to what was showcased and discussed at DWS17. There are several key parts of the new HDP release that can be discussed at greater length.

If you're interested in hearing about Bilot's Big Data offering and how Hortonworks Data Platform can help your organisation, get in touch and let's talk!


12.04.2017 / Ville Niemijärvi

Many companies are recruiting data scientists and analysts. Neural network wizards and deep learning experts.

But what concrete skills should you expect from such a recruit?

Or what skills should a recent graduate, for example a statistics student, develop in order to move into data science and advanced analytics work?

I interviewed Louhia's data scientists, who have experience from dozens upon dozens of implementation projects and who also train new Finnish data scientists.

You can watch the interview here or at the end of this post.

Let's go through the key points of the interview here.

The most important skills of a Data Scientist

Lasse Liukkonen, who has worked at Louhia for five years and is one of the country's top advanced analytics experts, surprises a little when summarizing the skill requirements:

  1. Databases and SQL
  2. Interfaces and integrations (so-called ETL skills)
  3. Modeling, covering both
    1. data modeling and
    2. statistical modeling and algorithms

In the interview Lasse points out that, in principle, one can end up working as a data scientist without a statistics education. He adds, however, that in the most challenging cases you need to turn to a professional. Then again, without that expertise it can be hard to even recognize a challenging case on your own.

The answer perhaps reflects the fact that for Lasse, a master with double laudatur (statistics + mathematics) and a highly experienced analyst, statistical modeling and algorithm skills are basic bread and butter.

Having worked with Lasse on numerous projects, I dare say that quite a few IT/Controller/Business Intelligence folks would drop their gloves in the opening meters if they ran into the modeling gigs we have done.

When you are building predictions about people's health and safety, about bids worth millions of euros, or when a ministerial-level order comes in to build a prediction model on topic X within 24 hours, I personally want solid statistical expertise behind it.

But it is true that most of the time goes into working with the data: extracting, cleaning, exploring and generally wrangling it.

Considerably less time is spent working with the algorithms themselves, i.e. coding in R or Python.

Olli Leppänen, a data scientist at Louhia, indeed names SQL as his most used tool.

And these skills are hardly taught at university. So here is a tip for statistics students: complement your statistics training especially with databases (relational + NoSQL) and SQL.
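
As a rough illustration of that split between data wrangling and modeling, the sketch below spends most of its lines on SQL-style data preparation and only a couple on the actual model; the database, table and column names are invented, with SQLite standing in for whatever relational database you happen to use.

```python
# Most of the effort: shaping the data with SQL (names are illustrative).
import pandas as pd
import sqlite3  # stand-in for any relational database
from sklearn.linear_model import LogisticRegression

conn = sqlite3.connect("analytics.db")
features = pd.read_sql_query(
    """
    SELECT c.customer_id,
           COUNT(o.order_id)   AS order_count,
           SUM(o.total_amount) AS total_spend,
           MAX(o.order_date)   AS last_order_date,
           c.churned
    FROM customers c
    LEFT JOIN orders o ON o.customer_id = c.customer_id
    GROUP BY c.customer_id, c.churned
    """,
    conn,
)
features = features.dropna()

# The actual modeling: a couple of lines once the data is in shape.
model = LogisticRegression(max_iter=1000)
model.fit(features[["order_count", "total_spend"]], features["churned"])
```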

Mika Laukkanen, a veteran of the field who did his first advanced analytics work more than 20 years ago, adds to the skill requirements:

  • Understanding the connection between the (business) problem and the data: the ability to see how, and with what kind of data, the problem can be solved.
  • Modeling and method skills (machine learning, statistics).
  • Sitting on your hands. Especially when you get (unexpectedly) excellent results, check the contents of the data sets as well as the data preparation and modeling processes n times over, because the cause may well be a mistake in your own work (speaking from experience).

I only noticed today that Ari Hovi offers a "Data Scientist training program". Its curriculum supports the skill requirements listed above well.

Besides R and machine learning algorithms, it covers SQL, business data modeling and Hadoop.

Watch the interview with Louhia's data scientists and the first episode of the Louhia vlog below.

 


12.04.2017 / News Team

SAP has awarded HKScan the Silver SAP Quality Award in the EMEA* region in the Cloud Innovation Category.

The SAP Quality Award recognizes project excellence: building a solution that best addresses the identified business needs, customer commitment to the business case, the requirements gathering process, implementation, change management and overall achievement of the set KPIs after go-live.

“At HKScan, we wanted to renew our CRM processes, consolidate customer information and reduce manual reporting. We chose SAP Hybris Cloud for Customer to cover sales processes including new customer acquisition, customer and consumer services with the goal to increase sales, customer satisfaction and information transparency. Moving to a cloud based solution was a strategic decision to lower the TCO. Our implementation partner Bilot recommended the SAP Cloud Implementation Methodology and the Agile development model”, says Katri Metsämäki, Director Customer & Renewal Solutions, HKScan.

“In this project supported by Bilot, HKScan showed how high-quality implementation projects should be done. They defined requirements clearly, engaged key stakeholders early on to actively participate, and finally defined KPIs that made it possible to determine whether the project was successful. The judges were especially impressed with the way the project team worked with the business to get alignment when the cloud solution deviated from the business requirements,” says Taira Tepponen, Country Manager, SAP Finland.

Bilot CEO Mika Tanner is proud of HKScan and their success: “We are excited to know that HKScan has received this prestigious award and for a project that Bilot implemented. We are especially delighted that the award was received in the Cloud Innovation Category. Bilot has innovation in its genes and also has former merits in this discipline. In the previous edition of the SAP Quality Award, Containerships won a Bronze award at the Nordic level with Bilot’s support, for the first ever Cloud for Customer solution in the Nordics. We are always excited to contribute to our customers’ successes and help them grow their business.”

About HKScan

HKScan is the leading Nordic food company. We sell, market and produce high-quality, responsibly-produced pork, beef, poultry and lamb products, processed meats and convenience foods under strong brand names. Our customers are the retail, food service, industrial and export sectors, and our home markets comprise Finland, Sweden, Denmark and the Baltics. We export to close to 50 countries. In 2016, HKScan had net sales of nearly EUR 1.9 billion and some 7 300 employees.

About SAP

As market leader in enterprise application software, SAP (NYSE: SAP) helps companies of all sizes and industries run better. From back office to boardroom, warehouse to storefront, desktop to mobile device – SAP empowers people and organizations to work together more efficiently and use business insight more effectively to stay ahead of the competition. SAP applications and services enable more than 345,000 business and public sector customers to operate profitably, adapt continuously, and grow sustainably. For more information, visit www.sap.com.

About Bilot

Bilot is a growing software and service company established in 2005. We build heavy-duty end-to-end solutions for our customers. We are inspired when we get to implement a complete solution, from intelligent user interfaces through customer insight to integrated ERP processes. We build tomorrow’s business environments today. We are known for our ability to recognize the most important innovations and for our readiness to implement them to the highest standards.
Bilot is owned by its employees and our offices are located in Helsinki, Finland and Poznań in Poland. The company generated revenues of 15 MEUR in 2016 and employs 100 of the most creative and clear-sighted thinkers in the sector.

Press contact
Mika Tanner, CEO
Tel: +358 40 544 0477
Email: mika.tanner@bilot.fi
Twitter: @MikaTanner

*) Europe, Middle-East and Africa
1) TCO: Total Cost of Ownership


11.04.2017 / Janne Vihervuori

Colleagues at the Central Criminal Language Police smirked during last weekend's Easter-twig rounds: what on earth does "hecumenical" mean? Although Easter is one of the most important annual feasts of many religions, for most people it is a rather secular holiday: Easter bunnies or Tinder eggs, for some just pasha, and for Mieto 40 bowls of mämmi.

For Bilot, hecumenism means the voluptuous ecumenism of technologies and architectures: the mind-blowing collaboration of platforms that deliver similar benefits, and their striving towards closer communion.

CLOUD WORSHIP

The symbolism of resurrection, the reunion of body and soul, hecumenism, means for us at Bilot the union and transfiguration of technologies and architectures that our industry usually sorts into separate camps. From the very beginning, Bilot's blood work has shown pioneering spirit and an appreciation of distinctiveness, which in practice has meant creating added value on top of and around established systems, while others in the market have mostly been doing earthbound ERP.

We have always combined different services, technologies, products and platforms so that they reunite into wholes greater than their parts. Our mantra is omnichannel rather than multichannel – ever more risen into the cloud.

FROM THE CLOUD TO SERVERLESSNESS

Easter has traditionally been the season of long IT maintenance breaks. In the season's big ERP upgrades, a massive construct may literally have risen from the dead on the third day in its data center – at holiday hourly rates. Fortunately we live in an ever cloudier world, where dependence on earthly constraints keeps shrinking.

There is, however, one new thing the Language Police wants to preach about, in an almost messianic tone befitting the season: serverless ("palvelimeton" in Finnish).

Serverless architecture is now in the same position the cloud was in if we rewind as many years as the Son of Man had disciples. The cloud was resisted on righteous principle. The serverless world is now making the same kind of leap the cloud once made.

Since the Language Police is always precise about details, let it be remembered that even behind serverless architecture there is, somewhere, a server. What is new is the separation of body and soul – the server and the function being executed, i.e. the code – which Bilot will keep hecumenically reuniting in the times after Easter as well.
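
For the record, and in keeping with the Language Police's love of detail, here is a minimal sketch of what such a serverless function looks like in Python; the handler signature follows the common AWS Lambda convention, and the event field is invented for the example.

```python
# A minimal serverless-style function: only the code exists from the
# developer's point of view; the platform supplies the server.
# The "name" event field is an invented example, not a real API.
import json

def handler(event, context):
    # The platform invokes this function on demand and scales it for us.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Happy Easter, {name}!"}),
    }
```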

A voluptuously good Easter to you!

The Language Police's previous raids:

Uuden vuoden digitaalistietotekninen Härkisjämä-bingo
Haasta haasteellisuus
Hankkeet hankeen
Kielipoliisi selvittää: Teollinen internet


6.04.2017 / Mathias Hjelt

Bilot has helped customers with SAP-integrated Magento B2B ecommerce for some years, but didn’t become a Magento partner until very recently. Obviously, the first thing to do as an official partner was to go “all in”, by attending Imagine 2017 in Las Vegas.

Magento Imagine is the biggest annual event for Magento customers, community members and independent software vendors to meet, learn and expand their horizons. This year, close to 3000 persons got together for 3+ days packed with tech deep-dives, customer case studies, product news, trainings, a constant flow of totally spontaneous networking – and the occasional party and sports activity.

Summarizing the overall Imagine experience, here are my top takeaways:

#1: Community is king

Magento Commerce comes in two editions – Community Edition and Enterprise Edition – but when people talk about The Community, they do not refer to these licensing flavors, but rather to the huge ecosystem of devoted partners, developers, agencies and merchants who live and breathe Magento.

Compared to the partner landscape around many commercial enterprise software products I’ve experienced, the one around Magento is clearly different. This is due to the open source nature of Magento. Partners truly collaborate, share knowledge, code and best practices, and do their best to influence the product code base — all for the greater good.

Key takeaway #1: Community members work on making the cake bigger and better, rather than fiercely competing over the same parts of the cake. Everybody wins.

At an event like Imagine, the community spirit becomes really tangible. Developers, who during the year collaborate on GitHub and Twitter, get a chance to chat over beer, deliver official talks (like SnowDog’s Bartek @igloczek, who was on stage talking about how Magento UX should really be done) – or go for a voluntary run with the like-minded. This year, the community-enthusiast-driven #BigDamRun took nearly 200 orange Magento people on a sunny 10 km trail run in the hills by Hoover Dam, and ended up in CEO Mark Lavelle’s keynote deck.


#2: The product is booming

Since Magento perhaps hasn’t been in the hottest spotlight of analyst reports (watch out for the upcoming Gartner quadrant, though!) and has had a bit of a bumpy ride in terms of previous ownership, the occasional bystander could assume that Magento isn’t the most “happening” platform in the game. Big mistake!

Looking back, 2016 was very active: Magento acquired RJMetrics and turned it into Magento Business Intelligence, a cloud-based all-in-one ETL & DW & visualization package for commerce analytics. Magento acquired BlueFoot CMS, which will be integrated in Magento 2.3 slated for H2 2017. Magento launched Digital Commerce Cloud for those who prefer to let the vendor take care of sizing, hosting and platform management. Magento launched its own Order Management module. And made Magento Commerce 2 a lot better. (FitForCommerce presented a semi-scientific study on total cost of implementation, indicating that implementation efforts have decreased as Magento 2 has matured!)

Key takeaway #2: Magento is investing heavily in expanding the platform in many different directions at the same time. This is a clear signal: Magento means serious business. With the help of the community, it’ll be hard to fail.

What next? Imagine 2017 had some launches to unveil: Magento Shipping simplifies integration with shipping providers. Magento Social takes your commerce to Facebook and other social channels. The Magento B2B Module, scheduled for 2.2 in Q3, brings a host of serious B2B functionality to Magento Commerce. B2B features which previously required 3rd party extensions or custom coding will now be available out of the box, e.g.:

  • personalized / shared catalogs
  • bulk ordering
  • recently purchased items & requisition lists
  • corporate accounts with hierarchical buyer organization modelling
  • fine-grained user permissions
  • self-registration of corporate accounts
  • RFQ and quoting process

Obviously, most larger clients will need to integrate these B2B features with their backend systems, such as SAP ERP, but it’s great that the basic functionality is available. Partners like Bilot are available for the end-to-end integration magic.
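
As a purely hypothetical illustration of that backend integration, the sketch below pushes a customer record from an ERP into a Magento B2B company account over REST; the endpoint path, payload fields and token handling are simplified assumptions for illustration, not a verified Magento B2B contract.

```python
# Hedged sketch: pushing a customer record from a backend (e.g. SAP ERP)
# into a Magento B2B company account over Magento's REST API.
# The endpoint path, payload fields and token handling are simplified
# assumptions for illustration only.
import requests

MAGENTO_BASE = "https://shop.example.com/rest"  # hypothetical store URL
TOKEN = "integration-access-token"              # obtained via Magento integration setup

def push_company_from_erp(erp_customer: dict) -> dict:
    payload = {
        "company": {
            "company_name": erp_customer["name"],
            "company_email": erp_customer["email"],
            # Keeping the ERP customer number makes reconciliation easier later.
            "comment": f"ERP customer {erp_customer['customer_number']}",
        }
    }
    resp = requests.post(
        f"{MAGENTO_BASE}/V1/company",  # assumed B2B endpoint
        json=payload,
        headers={"Authorization": f"Bearer {TOKEN}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()
```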

As silence falls over Vegas…

Well, Las Vegas never goes silent, but Imagine does come to an end. Thousands head back home, reversing their #RoadToImagine trips, taking new insights and inspiration with them to their daily lives as merchants, developers, ISVs and Magento employees.

I can warmly recommend Magento Imagine to anyone interested in hooking up with community members, learning about the platform, or hearing real-life customer success stories related to Magento. Definitely happy that I joined this year. And I’m glad I brought my running shoes!