Application Integration Made Simple with SAP HANA Cloud Integration

SAP HANA Cloud Integration (HCI) services are built from the ground up to help customers integrate smoothly with on-premise systems. The course also addresses typical integration challenges, an overview of the integration technology, pre-packaged integration flows and out-of-the-box solutions, multi-tenancy support, HCI software updates, security, data isolation, and the licenses/packages available to customers and partners.

Reference – Unit 1 Slide 6

Most important is understanding how the end-to-end web tooling works and the different phases of an integration scenario, which are also discussed. Unit 1 also covers some customer examples.

Reference – Unit 1 Slide 7

Unit 2 – Introduction to Web UI

This unit helps you understand the Web UI and how it really helps solve integration scenarios. The course discusses three phases:

1. Discover

Pre-packaged content from the Integration Catalog can be copied to your workspace and adjusted to your needs.

Reference – Unit 2 Slide 16

2. Design

This phase is used for configuration and customizing according to user requirements; it offers a modeling area, palettes, a properties view, and more.


Reference – Unit 2 Slide 17

3. Run & Monitor

Once the configuration is done, deploy the package and analyze it from the Monitor section.


Reference – Unit 2 Slide 18

This unit includes a demonstration scenario that really helps you understand all the phases of HCI.


Unit 3 – Configuring Pre-Packaged Integration Content

Now it's time to get your hands a little dirty. In this unit we start to understand how to search for a pre-configured package in the catalog, how to copy it and rename it to avoid conflicts (this option has changed slightly: I did not get the option to rename it when I clicked copy; instead, the rename option appeared when I started editing), how to configure the source and destination integration flows, edit the connection details, and deploy them. Finally, at the end of the unit, we also learn how to monitor.


Reference – Unit 3 Slide 27

Do not forget to watch the demo of this complete scenario.


Unit 4 – Creating Integration Processes from Scratch

I would say Units 4 and 5 are the most important parts of this Week 1 course. This unit covers building an integration process from scratch and introduces you to "Apache Camel", along with two related books: "Camel in Action" by Claus Ibsen and Jonathan Anstey and "Enterprise Integration Patterns" by Gregor Hohpe and Bobby Woolf.


Reference – Unit 4 Slide 41

The above image explains how a sender component talks to a receiver component, how messages are processed via integration channels, and which processes and underlying framework are used. The unit then continues with a hands-on demo explaining how to create a new integration process and test it with an SFTP file transfer. You will also learn how to monitor and verify a successful data transfer.
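The sender-to-receiver flow described above follows the pipes-and-filters style from Enterprise Integration Patterns: a message leaves the sender, passes through a chain of processing steps in the channel, and arrives at the receiver. Here is a minimal Python sketch of that idea; all names are illustrative and not part of the HCI or Camel API:

```python
# Pipes-and-filters sketch: a sender's message passes through a chain of
# processing steps (the "channel") before reaching the receiver.
# All names are illustrative; this is not the HCI or Camel API.

def to_upper(message):
    # One filter: normalize the payload to upper case.
    return message.upper()

def add_envelope(message):
    # Another filter: wrap the payload in a simple envelope.
    return f"<msg>{message}</msg>"

def route(message, steps):
    """Pass the message through each processing step in order."""
    for step in steps:
        message = step(message)
    return message

# Sender produces a payload; the channel transforms it; the receiver
# gets the final result.
result = route("order 42", [to_upper, add_envelope])
print(result)  # <msg>ORDER 42</msg>
```

Each filter is independent of the others, which is what lets a tool like HCI let you rearrange, add, or remove steps in a flow without touching the endpoints.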


I would recommend doing this exercise to get an understanding of Integration Services.


Unit 5 – Working with Data in SAP HANA Cloud Integration

This unit focuses on the data formats used and the fundamental properties of a Message and an Exchange (a message's container during routing).
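To make the terminology concrete, here is a rough Python sketch of the message model: a Message carries headers and a body, and an Exchange wraps the message (plus route-wide properties) while it is being routed. The class and field names are my own simplification, not the actual HCI/Camel API:

```python
# Simplified message model; names are illustrative, not the real API.
from dataclasses import dataclass, field
from typing import Any, Optional

@dataclass
class Message:
    headers: dict = field(default_factory=dict)  # metadata about the payload
    body: Any = None                             # the payload itself

@dataclass
class Exchange:
    """Container that holds a message while it travels through a route."""
    exchange_id: str
    in_msg: Message
    out_msg: Optional[Message] = None               # used for request-reply
    properties: dict = field(default_factory=dict)  # live for the whole route

# A content-modifier-style step: set a header and rewrite the body.
ex = Exchange("ex-1", Message(headers={"source": "sftp"}, body="raw data"))
ex.in_msg.headers["processed"] = True
ex.in_msg.body = ex.in_msg.body.upper()
print(ex.in_msg.body)  # RAW DATA
```

The distinction matters in the exercise: headers travel with the message to the receiver, while exchange properties only exist inside the route.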


Reference – Unit 5 Slide 55


Reference – Unit 5 Slide 56

Once we understand the message and exchange formats, a simple scenario helps us grasp the message model; this is covered as a hands-on exercise.


Reference – Unit 5 Slide 57

Follow the video to configure the communication channel and the content modifier. Once done, you should be able to test it with SoapUI.

7 Surprising Innovations For The Future Of Computing


Moore’s Law posits that the number of transistors on a microprocessor — and therefore their computing power — will double every two years. It’s held true since Gordon Moore came up with it in 1965, but its imminent end has been predicted for years. As long ago as 2000, the MIT Technology Review raised a warning about the limits of how small and fast silicon technology can get.

The thing is, Moore’s Law isn’t really a law. It’s more of a self-fulfilling prophecy. Moore didn’t describe an immutable truth, like gravity or the conservation of momentum. He simply set our expectations, and lo, the chip makers delivered accordingly.

In fact, the industry keeps finding new ways to pack more power onto tinier chips. Unfortunately, they haven’t found ways to cut costs on the same exponential curve. As Fast Company reported in February 2016, the worldwide semiconductor industry is no longer planning to base its R&D plans for silicon chips around the notion of doubling their power every two years, because it simply can’t afford to keep up that pace in purchasing the incredibly complex manufacturing tools and processes necessary. Besides, current manufacturing technology may not be able to shrink silicon transistors much more than it already has. And in any event, transistors have become so tiny that they may no longer reliably follow the usual laws of physics — which raises questions about how much longer we’ll dare to use them in medical devices or nuclear plants.

So does that mean the era of exponential tech-driven change is about to come to a screeching halt?

Not at all.

Even if silicon chips are approaching their physical and economic limits, there are other ways to continue the exponential growth of computing performance, from new materials for chips to new ways to define computing itself. We’re already seeing technological advances that have nothing to do with transistor speed, like more clever software driven by deep learning and the ability to achieve greater computing power by leveraging cloud resources. And that’s only the tiniest hint of what’s coming next.

Here are a few of the emerging technologies that promise to keep computing performance rocketing ahead:

  • In-memory computing. Throughout computing history, the slowest part of processing has been getting the data from the hard disks where it’s stored to random access memory (RAM), where it can be used. A lot of processor power is wasted simply waiting for data to arrive. By contrast, in-memory computing puts massive amounts of data into RAM where it can be processed immediately. Combined with new database, analytics, and systems designs, it can dramatically improve both performance and overall costs.
  • Graphene-based microchips. Graphene — one molecule thick and more conductive than any other known material (see The Super Materials Revolution) — can be rolled up into tiny tubes or combined with other materials to move electrons faster, in less space, than even the smallest silicon transistor. This will extend Moore’s Law for microprocessors a few years longer.
  • Quantum computing. Even the most sophisticated conventional computer can only assign a one or a zero to each bit. Quantum computing, by contrast, uses quantum bits, or Qubits, which can be a zero, a one, both at once, or some point in between, all at the same time. (See this explainer video from the Verge for a surprisingly understandable overview.) Theoretically, a quantum computer will be able to solve highly complex problems, like analyzing genetic data or testing aircraft systems, millions of times faster than currently possible. Google researchers announced in 2015 that they had developed a new way for qubits to detect and protect against errors, but that’s as close as we’ve come so far.
  • Molecular electronics. Researchers at Sweden’s Lund University have used nanotechnology to build a “biocomputer” that can perform parallel calculations by moving multiple protein filaments simultaneously along nanoscopic artificial pathways. This biocomputer is faster than conventional electrical computers that operate sequentially, approximately 99 percent more energy-efficient, and cheaper than both conventional and quantum computers to produce and use. It’s also more likely to be commercialized soon than quantum computing is.
  • DNA data storage. Convert data to base 4 and you can encode it on synthetic DNA. Why would we want to do that? Simple: a little bit of DNA stores a whole lot of information. In fact, a group of Swiss researchers speculate that about a teaspoon of DNA could hold all the data humans have generated to date, from the first cave drawings to yesterday’s Facebook status updates. It currently takes a lot of time and money, but gene editing may be the future of big data: Futurism recently reported that Microsoft is investigating the use of synthetic DNA for secure long-term data storage and has been able to encode and recover 100 percent of its initial test data.
  • Neuromorphic computing. The goal of neuromorphic technology is to create a computer that’s like the human brain—able to process and learn from data as quickly as the data is generated. So far, we’ve developed chips that train and execute neural networks for deep learning, and that’s a step in the right direction. General Vision’s neuromorphic chip, for example, consists of 1,024 neurons — each one a 256-byte memory based on SRAM combined with 3,000 logic gates — all interconnected and working in parallel.
  • Passive Wi-fi. A team of computer scientists and electrical engineers at the University of Washington has developed a way to generate Wi-fi transmissions that use 10,000 times less power than the current battery-draining standard. While this isn’t technically an increase in computing power, it is an exponential increase in connectivity, which will enable other types of advances. Dubbed one of the 10 breakthrough technologies of 2016 by MIT Technology Review, Passive Wi-fi will not only save battery life, but enable a minimal-power Internet of Things, allow previously power-hungry devices to connect via Wi-fi for the first time, and potentially create entirely new modes of communication.

So while we may be approaching the limits of what silicon chips can do, technology itself is still accelerating. It’s unlikely to stop being the driving force in modern life. If anything, its influence will only increase as new computing technologies push robotics, artificial intelligence, virtual reality, nanotechnology, and other world-shaking advances past today’s accepted limits.

In short, exponential growth in computing may not be able to go on forever, but its end is still much farther in the future than we might think.

What Will Influencers Do Now?


Rumblings were heard across social media recently when Bloomberg reported that the FTC will be stepping up enforcement of sponsored social media posts. According to Bloomberg, brands spend $255 million each month for sponsored, or influencer, posts.
The issue is disclosure. The FTC took Warner Brothers to task recently for the company’s YouTube campaign featuring paid endorsements of its video game “Middle-earth: Shadow of Mordor.” The influencer posts either did not indicate that they were paid for by Warner Brothers, or the information was visible only with a click-through. (If you’re interested in the FTC’s rules for endorsements, here’s the guide.)
The FTC says that advertisers should be responsible for ensuring that sponsorship is clear to viewers, which usually means adding the hashtags #ad, #sp, #sponsored, or #collab. But the FTC says it’s not okay to have a sponsorship hashtag crowded among 20 others at the end of a post, where it’s tough to pick out. Also, #sp and #spon aren’t clear enough, according to the agency.
Some say there’s a gray area with every new social media platform, and there’s no reason to identify a post as paid for. But the FTC is having none of it—the rules are the rules, it says, regardless of whether the content shows up on Snapchat, Twitter, YouTube, or any yet-to-be-invented platform. As one influencer told Bloomberg in another article, she does it the other way around: she notes when a post is not paid, so if there’s no such note, the post probably is sponsored.
Close, but not quite?
To get an idea of the money involved, how it works, and why this is such a gray area, this Women’s Wear Daily piece gets into the nitty-gritty, and also the root of some of the confusion—it doesn’t fit tidily into any of the traditional concepts of advertising.
There are similar problems in the UK, where the rules are set by the Advertising Standards Authority (ASA). The problem is with the interpretation of content control and payment. The Competition and Markets Authority has recently been going after unidentified sponsored postings. (The ASA sets standards; the CMA is the enforcer of consumer protections, among other things.)
A recent survey by SheSpeaks found that many influencers who accept payment for posts have been asked to keep the sponsorship a secret. SheSpeaks CEO Aliza Freud said that she believes this is due to brands not knowing the rules. But this desire for secrecy raises the question of why—are these brands worried that paid postings will turn off their customers, or dilute the influencer’s influence?
But there’s also been a bit of a backlash from influencers themselves. The lack of transparency, the difficulty in measuring the impact of campaigns, and the sometimes fraught relationship and expectations on both sides has led to some friction. The Tumblr Who Pays Influencers is an attempt to shed a bit of light on how it works and who’s paying what for what. Some influencers also feel that their work isn’t valued and worry about alienating their fan base.
On the other side are the social media executives, like the one who did an anonymous Q&A with Digiday last spring. They think the influencer trend is already on the wane. If that is true, the digital economy will need another outlet for brands and advertisers to hawk their products—one that leverages the personality and celebrity of influencers without alienating consumers with an overload of posts. Authenticity is a difficult thing to buy.