Profitability Analysis and CO with Simple Finance

This post describes the functionality offered by Simple Finance in COPA and management accounting.

A long-pending requirement from the majority of customers in the manufacturing industry is to get a break-up of the cost of goods sold (COGS) by component, with each component posted to a different GL account. This keeps COPA in sync with FI and removes a lot of reconciliation issues. This requirement has been addressed in simplified profitability analysis, which is part of Simple Finance.

In CO, the main focus is on reducing the time taken by month-end closing activities and on improving system performance. In Simple Finance we have a separate set of transaction codes to perform these activities.

  • In Simple Finance, SAP recommends account-based COPA. Account-based is the default solution: the advantages of costing-based COPA have been incorporated into it, reconciliation is improved by having a single document for Finance and COPA through the universal journal entry, and performance is improved through the SAP HANA database. The costing-based approach itself is unchanged.
  • The COEP, COSS, and COSP tables are replaced by ACDOCA.
  • Compatibility views are available to reproduce the data in the old structure; for example, V_COEP allows you to see the actual postings.
  • Assessments within CO still update COEP for the CO documents, while the corresponding accounting documents are written to ACDOCA.
  • Table ACDOCA stores both FI and CO postings in one unified document. Since an account-based COPA posting is the same as a CO posting, the characteristics of account-based COPA are also part of ACDOCA (see the sketch after this list).
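
As a rough illustration of how the universal journal can be queried, the ABAP Open SQL sketch below totals actual amounts by cost element directly from ACDOCA. It is only a sketch: the ledger '0L', company code '1000', and fiscal year '2016' are example values, and the field selection should be adapted to your own reporting needs.

  " Minimal sketch: aggregate universal journal line items by cost element.
  " Ledger, company code, and fiscal year below are example values only.
  SELECT rbukrs, gjahr, kstar,
         SUM( hsl ) AS amount
    FROM acdoca
    WHERE rldnr  = '0L'
      AND rbukrs = '1000'
      AND gjahr  = '2016'
    GROUP BY rbukrs, gjahr, kstar
    INTO TABLE @DATA(lt_totals).

The same line items can still be viewed in the old structure through the compatibility view V_COEP mentioned above.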

Configuration for splitting the cost of goods sold:

IMG: Spro > General Ledger Accounting (New) > Periodic Processing > Integration > Materials Management > Define Accounts for Splitting the Cost of Goods Sold

In account-based COPA, there was previously no option to split the cost of goods sold into its components. This did not allow the business to compare plan and actual costs of inventory at the component level, which can be the basis for production re-engineering. With Simple Finance, this option has been made available in account-based COPA.
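
As a purely illustrative example (the GL accounts and amounts below are invented), a goods issue that previously posted the entire cost of goods sold of 1,000 to a single COGS account could now be split along the cost component structure:

  Material consumption        600  ->  GL 500100
  Production labour           250  ->  GL 500200
  Production overhead         150  ->  GL 500300
  Total cost of goods sold  1,000

Because each component is posted to its own GL account, plan and actual costs can be compared per component directly in account-based COPA.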

Configuration for additional quantities:

Spro > Controlling > General Controlling > Additional Quantities > Define Additional Quantity Fields

Additional quantity fields can be configured. The BAdI FCO_COEP_QUANTITY has to be implemented to fill them.

In Simple Finance, the setting for profitability segment characteristics is no longer supported, as each profitability segment contains all available characteristic values.

With Simple Finance, Integrated Business Planning (IBP) is in general used for overall planning purposes. The planning available in account-based COPA continues to exist as before, but with its additional flexibility, IBP will be the primary planning tool going forward.

For reporting, user interface tools such as Lumira are used. These give additional flexibility, such as query-based reporting and real-time value updates. KE30 reports can still be created, however.

With Simple Finance, the key benefits are improved reconciliation with financials, better system performance, the cost of goods sold split, and planning with IBP.

I hope this post gives you some inputs on COPA in Simple Finance. Happy learning, and your valuable comments are welcome.

Visit – www.itchamps.com

@ITChamps_SAP – Pure Play SAP Consulting Firm

The Internet of Things and the era of “mass personalization”

There was a time when cars were assessed largely by their horsepower or miles-per-gallon. These were the yardsticks of comparison in the analog age. But now, the question asked of car owners is more likely to become, “What OS version do you have?”

Increasingly, our vehicles are becoming traveling data centers, with Wi-Fi hotspots, operator-assisted navigation, sophisticated diagnostic computers, smartphone-based control apps and real-time tools like parking assist and lane change warnings. They are no longer dumb machines, nor are these innovations exclusive to the most expensive prestige automobiles.

In fact, such talents are not exclusive to cars at all, since refrigerators, washing machines, thermostats, and devices of all kinds have joined this intelligent party. Even the most intangible of objects, insurance, has started to move into this sphere.

These items play a role in the new era known collectively as the Internet of Things (IoT), in which devices large and small communicate with owners, operators, manufacturers and suppliers to pro-actively maintain their operational health, and more importantly, to create a direct channel for up-selling and ongoing revenue opportunities. No company is excluded from this transformation.

As General Electric CEO Jeff Immelt said at a 2014 Minds+Machines summit, “If you went to bed last night as an industrial company, you’re going to wake up this morning as a software and analytics company.”

IoT-based products are different from their analog predecessors. A permanent, high-speed Internet-based relationship guarantees that vendor and manufacturer need never be detached from the products they sold, or from the customers they sold them to. IoT has spawned new partner ecosystems, and these, in turn, demand new business models.

  • The Cactus, a compact car from European automaker Citroën, comes with a variation on the traditional lease, one based on “pay-per-use.” Citroën acknowledges “there is a portion of the population that is not willing to buy a car, but is willing to buy the use of a car.” This manufacturer joins the ranks of  car-as-a-service operations like ZipCar, but ups the ante – and improves the convenience factor – by allowing a customer to keep the vehicle permanently in the driveway as their own car (for a small base rate), and then pay the balance according to when and how it is used.
  • Inside the home, IoT washing machines can now schedule service calls and part replacements directly, but more importantly can offer the owner a subscription for detergent delivery, with a three-month free trial, as well as other up-sells and cross-sells.
  • New insurance companies are rolling out health plans whose premiums decrease with the amount of exercise performed by the member, as reported by a wearable fitness tracker.
  • Cars and trucks of all types are now busy sending their diagnostic data back to the dealership, to ensure prompt servicing as well as increased up-selling potential between dealer and owner.

These physical examples of IoT represent more than an innovation in product design. They also demand a change in the perception of the consumer. Whether in the B2B or B2C markets, there is now just an audience of one. This forces a shift from a mass production mindset to one of “mass personalization,” using the power of data to understand and serve each customer on their own terms.

Pricing, too, must change. Where once there was one pricing model, other options exist, for even the largest of big-ticket items. These can include:

  • “buy-once” products bundled with a subscription
  • free trial/freemiums, leading to recurring payments or subscriptions
  • pay-per-use pricing; and
  • entitlements (such as 2 free appointments with a personal trainer when purchasing exercise equipment)

The IoT revenue model ushers in an updated and more reliable style of loss-leader pricing, starting with a negative BOM (bill of materials), but then subsidizing the cost of the hardware with the revenues from bundled services. Loss-leader pricing in itself is not new; razor blade manufacturers, among others, have been using a negative BOM model for years. But now the analytics have become more accurate, and the customer relationship more individualized, consequently tipping the concept from a traditional loss leader to a more reliable positive cash flow formula.
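
A simple hypothetical illustration of the model (all figures invented): a connected appliance with a bill of materials of $450 might be sold for $400, a loss of $50 per unit. A bundled consumables subscription at $12 per month with a 40 percent margin contributes about $4.80 per month, so the hardware loss is recovered in roughly 11 months, and every month after that is positive cash flow for as long as the subscription runs.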

Bringing these thoughts back to the notion of cars being assessed by their OS version, it is important to bear in mind that an operating system is, by its very nature, upgradeable, and reinforces the link between manufacturer and purchaser. In a way, the purchase is never complete, since upgrades bring along with them new features and additional support and purchase opportunities.

That is where the Internet of Things truly shines. It is a permanent pathway linking seller and buyer—a dynamic relationship of mutual benefit that grows and improves over time.

How to Rewire the Organization for the Internet of Things

Success in the IoT requires new levels of speed, agility, and flexibility, not just from the systems delivering IoT services but also from the people charged with making those services happen.

Hyperconnectivity, the concept synonymous with the Internet of Things (IoT), is the emerging face of IT in which applications, machine-based sensors, and high-speed networks merge to create constantly updated streams of data. Hyperconnectivity can enable new business processes and services and help companies make better day-to-day decisions. In a recent survey by the Economist Intelligence Unit, 6 of 10 CIOs said that not being able to adapt for hyperconnectivity is a “grave risk” to
their business.

IoT technologies are beginning to drive new competitive advantage by helping consumers manage their lives (Amazon Echo), save money (Ôasys water usage monitoring), and secure their homes (August Smart Lock). The IoT also has the potential to save lives. In healthcare, this means streaming data from patient monitoring devices to keep caregivers informed of critical indicators or preventing equipment failures in the ER. In manufacturing, the IoT helps drive down the cost of production through real-time alerts on the shop floor that indicate machine issues and automatically correct problems. That means lower costs for consumers.

Several experts from the IT world share their ideas on the challenges and opportunities in this rapidly expanding sector.

Where are the most exciting and viable opportunities right now for companies looking into IoT strategies to drive their business?

Mike Kavis: The best use case is optimizing manufacturing by knowing immediately what machines or parts need maintenance, which can improve quality and achieve faster time to market. Agriculture is all over this as well. Farms are looking at how they can collect information about the environment to optimize yield. Even insurance companies are getting more information about their customers and delivering custom solutions. Pricing is related to risk, and in the past that has been linked to demographics. If you are a teenager, you are automatically deemed a higher risk, but now providers can tap into usage data on how the vehicle is being driven and give you a lower rate if you present a lower risk. That can be a competitive advantage.

Dinesh Sharma: Let me give you an example from mining. If you have sensored power tools and you have a full real-time view of your assets, you can position them in the appropriate places. Wearable technology lets you know where the people who might need these tools are, which then enables more efficient use of your assets. The mine is more efficient, which means reduced costs, and that ultimately results in a margin advantage over your competition. Over time, the competitive advantage will build and there will be more money to invest in further digital transformation capabilities. Meanwhile, other mining companies that aren’t investing in these technologies fall further behind.

With the IoT, how should CIOs and other executives think and act differently?

Martha Heller: The points of connection between IT and the business should be as strategic and consultative as possible. For example, the folks from IT who work directly with R&D, marketing, and data scientists should be unencumbered with issues such as network reliability, help desk issues, and application support. Their job is to be a business leader and to focus on innovative ideas, not to worry for an instant about “Oh your e-mail isn’t working?” There’s also obviously the need for speed and agility. We’ve got to find a way to transform a business idea into something that the businessperson can touch and feel as quickly as possible.

Greg Kahn: Companies are realizing that they need to partner with others to move the IoT promise forward. It’s not feasible that one company can create an entire ecosystem on their own. After all, a consumer might own a Dell laptop, a Samsung TV, an Apple watch, a Nest device, an August Smart Lock, and a Whirlpool refrigerator.

It is highly unrealistic to think that consumers will exchange all of their electronic equipment and appliances for new “connected devices.” They are more likely to accept bridge solutions (such as what Amazon is offering with its Dash Replenishment Service and Echo) that supplement existing products. CIOs and other C-suite executives will need to embrace partnerships boldly and spend considerable time strategizing with like-minded individuals at other companies. They should also consider setting up internal venture arms or accelerators as a way to develop new solutions to challenges that the IoT will bring.

What is the emerging technology strategy for effectively enabling the IoT?

Kavis: IT organizations are still torn between DIY cloud and public cloud, yet with the IoT and the petabytes of data being produced, it changes the thinking. Is it really economical to build this on your own when you can get the storage for pennies in the cloud? The IoT also requires a different architecture that is highly distributed, can process high volumes of data, and has high availability to manage real-time data streaming.

On-premise systems aren’t really made for these challenges, whereas the public cloud is built for autoscaling. The hardest part is connecting all the sensors and securing them. Cloud providers, however, are bringing to market IoT platforms that connect the sensors to the cloud infrastructure, so developers can start creating business logic and applications on top of the data. Vendors are taking care of the IT plumbing of getting data into the systems and handling all that complexity so the CIO doesn’t need to be the expert.

Kahn: All organizations, regardless of whether they outsource data storage and analysis or keep it in house, need to be ready for the influx of information that’s going to be generated by IoT devices. It is an order of magnitude greater than what we see today. Those that can quickly leverage that data to improve operational efficiency and consumer engagement will win.

Sharma: The future is going to be characterized by machine interactions with core business systems instead of by human interactions. Having a platform that understands what’s going on inside a store – the traffic near certain products together with point-of-sale data – means we can observe when there’s been a lot of traffic but the product’s just not selling. Or if we can see that certain products are selling well, we can feed that data directly into our supply chain. So without any human interaction, when we start to see changes in buying behavior we can update our predictive models. And if we see traffic increasing in another part of the store in a similar pattern we can refine the algorithm. We can automatically increase supply of the product that’s in the other part of the store. The concept of a core system that runs your process and workflow for your business but is hyperconnected will be essential in the future.

Privacy and security are among the top concerns with hyperconnectivity. Are there any useful approaches yet?

Kavis: We have a lot less control over what is coming into companies from all these devices, which is creating many more openings for hackers to get inside an organization. There will be specialized security platforms and services to address this, and hardware companies are putting security on sensors in the field. The IoT offers great opportunities for security experts wanting to specialize in this area.

Kahn: The privacy and security issues are not going to be solved anytime soon. Firms will have to learn how to continually develop new defense mechanisms to thwart cyber threats. We’ve seen that play out in the United States. In the past two years, data breaches have occurred at both brick-and-mortar and online retailers. The brick-and-mortar retail industry responded with a new encryption device: the chip card payment reader. I believe it will become a cost of business going forward to continually create new encryption capabilities. I have two immediate suggestions for companies: (1) develop multifactor authentication to limit the threat of cyber attacks, and (2) put protocols in place whereby you can shut down portions of systems quickly if breaches do occur, thereby protecting as much data as possible.

Highest ROI in e-commerce? Email remarketing and retargeted ads

Digital marketers know they must measure and optimize all of their efforts, with the goal of increasing sales. They must also be able to prove a positive return on their investments. With that in mind, digital marketers are constantly on the hunt for the latest technologies to help with both.

Shopping Cart Abandonment Emails Report Highest ROI

The highest ROI reported is from shopping cart abandonment emails. This shouldn’t be a surprise — 72 percent of site visitors who place items into an online shopping cart don’t make the purchase. Since they almost purchased, cart abandoners are now your best prospects, and a sequence of carefully timed emails will recover between 10 and 30 percent of them.
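
To make the arithmetic concrete (the traffic figures are hypothetical): if 10,000 visitors add items to a cart in a given month, roughly 7,200 of them will abandon it. Recovering 10 to 30 percent of those abandoners wins back between 720 and 2,160 orders that would otherwise have been lost.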

It’s these types of recovery rates that propel shopping cart abandonment emails to the top. They generate millions in incremental revenue for only a small effort and cost.

Retargeted Ads Complement Shopping Cart Abandonment Emails

The second most successful technique is retargeted advertising, a fantastic complement to shopping cart abandonment emails. Retargeted advertising works in a similar way, by nudging visitors to return to a website after they have left. And while retargeted advertising works across the entire funnel — from landing to purchase — the biggest opportunities lie where there is some level of intent to purchase, such as browsing category and product pages.

While the two techniques deliver a high ROI, they are definitely not the same. For example, brands using SeeWhy’s Conversion Manager to engage their shopping cart recovery emails average a 46 percent open rate and 15 percent click-through rate. Retargeted ads, by comparison, average a 0.3 percent click-through rate.

See the difference?

The real power comes when you combine the two techniques together — using retargeted advertising when no email address has been captured and email remarketing when it has.

Don’t “Set ‘Em and Forget ‘Em”

To achieve the highest possible ROI combining cart abandonment emails with retargeted advertising, you should plan to test and tune your campaigns. It’s dangerous to go live with your new campaign and then ‘set it and forget it.’ Testing and tuning your campaign can double or triple your revenues. SeeWhy tracks more than $1B in Gross Market Value ecommerce revenues annually and analyzes this data to understand what factors have the biggest impact on conversion.

A SeeWhy study of more than 650,000 individual ecommerce transactions last year concluded that the optimal time for remarketing is immediately following abandonment. Of those visitors who abandon but later return and purchase, 72 percent do so within the first 12 hours.

So timing is one of the critical factors; waiting 24 hours or more means that you’re missing at least 3 out of 4 of your opportunities to drive conversions. For example, a shopping cart recovery email campaign sent by Brand A 24 hours after abandonment may be its top performing campaign. But this campaign delivers half the return of Brand B’s equivalent campaign which is real time.

Scores of new technologies and techniques will clamor for your attention, making bold claims about their ROI and conversion. But if they aren’t capable of combining shopping cart abandonment emails and retargeted ads, the two biggest ROI drivers in the industry, then they aren’t worth your time.

@JovieSylvia @ITChamps_SAP

Take a look at our website: www.itchamps.com

Our Digital Planet: Rise of the Digital Worker – The New Breed of Worker

British-Australian mining giant Rio Tinto has employed autonomous trucks, excavators and drills recently to create the first workerless iron ore mine in Western Australia. The drivers – if they can still be called that – work out of a remote operations centre hundreds of kilometres away, where data scientists mine data collected from the vehicle’s sensors. This dynamic, known as the ‘human and digital recombination’, is but a single step on the path to a changed workplace, as connectivity and automation drive the transition to digital on an unprecedented scale.

Real-time analysis, together with emerging digital technologies and intelligent digital processes, has upended the workplace as we know it; and businesses are today subject to a deep cultural shift in work organisation, culture and management mindset. The impact is a shift towards workers acting on available information, as opposed to ‘explorative surgery’ measures taken when the damage is already done.

Human and digital recombination, cutting-edge decision making, realtime adaptation and experiment-driven design are pushing this transformation, not just in manufacturing but in every conceivable area of the workplace. And while the technology has done much to facilitate the transition to digital, the challenges are many.

Fat tags

Aside from Rio Tinto’s automated vehicles, other software-enabled, manufacturing-friendly marvels are around the corner, such as kilobyte-rich radio frequency identification (RFID) tags. Basically position finders at present, tomorrow’s tags will have so much storage capacity that they will act like transponders and actually tell people what to do.

As Siemens’ Markus Weinlander, Head of Product Management, predicted: “[RFID tags] can make a major contribution to the realisation of Industry 4.0 by acting as the eyes and ears of IT. For the first time, transponders will be able to carry additional information such as the production requirements together with their assembly plan. All of this will be readable at relatively large distances.”

These ‘fat tags’ will do more than boost automation. They will also make companies more nimble-footed and, say experts, allow small businesses to compete with the giants. According to Weinlander, the new wave of RFID tags will greatly facilitate customised products because they will contain all the essential information for small runs. “To remain competitive in today’s global market environment, many companies have to be able to produce in tiny batches without higher costs”, he said.

Other practical benefits are likely. For instance, maintenance and repair work will be made simpler, faster and more timely. As BCG Consulting points out, technicians will identify any problems with a machine from a stream of realtime data and then make repairs with the help of augmented-reality technology supplemented, if necessary, by remote guidance from off-site experts. In this way, downtime per machine will be reduced from one day to an hour or two.

Digital people

In this brave new world of hyperconnectivity, the ‘digital worker’ – a data-driven individual skilled in converting information into revenue – will stand in the middle and direct traffic, as it were. As SAP put it in its D!gitalist magazine, the digital worker will “create instant value from the vast array of real-time data.”

Instead of the traditional approach of gathering, processing, and moving data around while spending valuable time creating reports, digital workers will be forced to move towards predictive, scenario, and prognosis-based decision-making. SAP’s article goes on to explain: “The speed of information and data is driving such significant change in how and where we work that the digital worker is becoming a critical resource in decision-making, learning, productivity, and overall management of companies.”

HYPERCONNECTIVITY HAS LED US TO A NEW ERA, WHERE PETER DRUCKER’S “KNOWLEDGE WORKER” HAS COME TO AN END AND THE “DIGITAL WORKER” NOW NEEDS TO STEP UP AND CREATE INSTANT VALUE FROM THE VAST ARRAY OF REAL-TIME DATA

In organisations where data-savvy individuals may know more about what’s happening than the boss, the top-down hierarchy will be overturned. In short, everybody will be a leader in their own particular area of expertise. “The traditional management and organisational model is quickly getting outdated in the digital economy, and true leaders are changing their management approach to reflect this”, said SAP. Senior executives will have to be more visible and approachable for employees and customers alike – in short, both colleague and captain.

“[Managers] must juggle a distributed contingent workforce with digital workers who require real-time analysis, prognosis, and decision making. At the same time, they must develop the next generation of leaders who will actively take responsibility for innovation and engagement”, said SAP.

If done properly, this new collaborative workplace could reduce the complexity that bedevils most large organisations in an era of globalisation. According to the Economist Intelligence Unit, 55 percent of executives believe their organisational structure is ‘extremely’ or ‘very’ complex and 22 percent say they spend more than a quarter of their day managing complexity. More than three-quarters say they could boost productivity by at least 11 percent if they could cut complexity by half.

More jobs

But will the superconnected workplace destroy jobs? BCG Consulting thinks not. In a study of German manufacturing released in October, the think tank concluded that higher productivity actually equals higher employment at home. “As production becomes more capital intensive, the labour cost advantages of traditional low-cost locations will shrink, making it attractive for manufacturers to bring previously off-shored jobs back home”, the study predicted. “The adoption of Industry 4.0 will also allow manufacturers to create new jobs to meet the higher demand resulting from the growth of existing markets and the introduction of new products and services.”

Experts such as Ingo Ruhmann, Special Adviser on IT systems at Germany’s Federal Ministry of Education and Research, agree with this finding. “Complete automation is not realistic”, he told BCG Perspectives. “Technology will mainly increase productivity through physical and digital assistance systems, not the replacement of human labour.”

However, it will be a new kind of human labour. “The number of physically demanding or routine jobs will decrease while the number of jobs requiring flexible responses, problem solving, and customisation will increase”, Ruhmann predicts. For most employees, tomorrow’s workplace should be a lot more fun.

Identifying Performance Problems in ABAP Applications

Analyze Statistics Records Using the Performance Monitor (Transaction STATS)

IT landscapes tend to become more complex over time as business needs evolve and new software solutions for meeting these needs emerge. This complexity can become a challenge for maintaining smooth-running business operations and for ensuring the performance and scalability of applications.

Performance means different things to different people. End users demand a reasonable response time when completing a task within a business process. IT administrators focus on achieving the required throughput while staying within their budget, and on ensuring scalability, so that a software’s resource consumption changes predictably with its load (the number of concurrent users or parallel jobs, or the number or size of processed business objects, for example). For management, optimal performance is about more productive employees and lower costs.

The overall objective is to reach the best compromise between these perspectives. This requires reliable application monitoring data, so that developers or operators can analyze an application’s response time and resource consumption, compare them with expectations, and identify potential problems. SAP customers running SAP NetWeaver Application Server (SAP NetWeaver AS) ABAP 7.4 or higher can access this data with a new, simple, user-friendly tool: the Performance Monitor (transaction STATS).

This article provides an introduction to the statistics records that contain this monitoring data for applications that run on the ABAP stack, and explains how you can use transaction STATS to analyze these records and gain the insights you need to identify applications’ performance issues.

What Are Statistics Records?

Statistics records are logs of activities performed in SAP NetWeaver AS ABAP. During the execution of any task (such as a dialog step, a background job, or an update task) by a work process in an ABAP instance, the SAP kernel automatically collects header information to identify the task, and captures various measurements, such as the task’s response time and total memory consumption. When the task ends, the gathered data is combined into a corresponding statistics record. These records are stored chronologically — initially in a memory buffer shared by all work processes of SAP NetWeaver AS ABAP. When the buffer is full, its content is flushed to a file in the application server’s file system. The collection of these statistics records is a technical feature of the ABAP runtime environment and requires no manual effort during the development or operation of an application.

The measurements in these records provide useful insights into the performance and resource consumption of the application whose execution triggered the records’ capture, including how the response times of the associated tasks are distributed over the involved components, such as the database, ABAP processing (CPU), remote function calls (RFCs), or GUI communication. A detailed analysis of this information helps developers or operators determine the next steps in the application’s performance assessment (such as the use of additional analysis tools for more targeted investigation), and identify potential optimization approaches (tuning SQL statements or optimizing ABAP coding, for example). In addition to performance monitoring, the statistics records can be used to assess the system’s health, to evaluate load tests, and to provide the basis for benchmarks and sizing calculations.

Productive SAP systems process thousands of tasks each second and create a corresponding number of statistics records, which contain valuable measurements for identifying performance issues. While existing tools provide access to the data in these records, transaction STATS offers an enhanced user interface that makes it much easier for you to select, display, and evaluate the data in statistics records, and devise a targeted plan for optimization.

Ensuring the Validity of Your Measurements

The value of a performance analysis depends on the quality of the underlying measurements. While the collection of data into the statistics records is performed autonomously by the SAP kernel, some preparatory actions are needed to ensure that the captured information accurately reflects the performance of the application.

The test scenario to be executed by the application must be set up carefully; otherwise, the scenario will not adequately represent the application’s behavior in production and will not yield the insights you need to identify the application’s performance problems. A set of test data that is representative of your productive data must be available to execute the scenario in a way that resembles everyday use. The test system you will use for the measurements must be configured and customized correctly — for example, the hardware sizing must be sufficient and the software parameterization must be appropriate, so that the system can handle the load. To obtain reliable data, you must also ensure that the test system is not under high load from concurrently running processes — for example, users should coordinate their test activities to make sure there are no negative interferences during the test run.

You must then execute the scenario a few times in the test system to fill the buffers and caches of all the involved components, such as the database cache, the application server’s table buffer, and the web browser cache. Otherwise, the measurements in the statistics records will not be reproducible, and will be impaired by one-off effects that load data into these buffers and caches. This will make it much more difficult to draw reliable conclusions — for example, buffer loads trigger requests to the database that are significantly slower than getting the data out of the buffer, and that increase the amount of transferred data. After these initial runs, you can execute the measurement run, during which the SAP kernel writes the statistics records that you will use for the analysis.

Displaying the Statistics Records

To display the statistics records that belong to the measurement run, call transaction STATS. Its start screen (see Figure 1) consists of four areas, where you specify criteria for the subset of statistics records you want to view and analyze.

Figure 1 — On the STATS start screen, define filter conditions for the subset of statistics records you want to analyze, specify from where the records are retrieved, and select the layout of the data display

In the topmost area, you determine the Monitoring Interval. By default, it extends 10 minutes into the past and 1 minute into the future. Records written during this period of time are displayed if they fulfill the conditions specified in the other areas of the start screen. Adjust this interval based on the start and end times of the measurement run so that STATS shows as few unrelated records as possible.

In the Record Filter area, you define additional criteria that the records to be analyzed must meet — for example, client, user, or lower thresholds for measurement data, such as response time or memory consumption. Be as specific and restrictive as possible, so that only records relevant for your investigation will be displayed.

By default, statistics records are read from all application instances of the system. In the Configuration section, you can change this to the local instance, or to any subset of instances within the current system. Restricting the statistics records retrieval to the instance (or instances) where the application was executed shortens the runtime of STATS. The Include Statistics Records from Memory option is selected by default, so that STATS will also process records that have not yet been flushed from the memory buffer into the file system.

Under Display Layout, select the resource you want to focus on and how the associated subset of key performance indicators (KPIs) — that is, the captured data — will be arranged in the tabular display of statistics records. The Main KPIs layouts provide an initial overview that contains the most important data and is a good starting point.

Analyzing Selected Statistics Records

Figure 2 shows the statistics record display based on the settings specified in the STATS start screen. The table lists the selected statistics records in chronological order and contains their main KPIs.

Figure 2

The header columns — shown with a blue background — uniquely link each record to the corresponding task that was executed by the work process. The data columns contain the KPIs that indicate the performance and resource consumption of the tasks. Measurements for times are given in milliseconds (ms) and memory consumption and data transfer are measured in kilobytes (KB).

The table of statistics records is displayed within an ALV grid control and inherits all functions of this well-known SAP GUI tool: You can sort or filter records; rearrange, include, or exclude columns; calculate totals and subtotals; or export the entire list. You can also switch to another display layout or modify the one you have chosen on the start screen. To access these and other standard functions, expand the toolbar by clicking on the Show Standard ALV Functions button.

The measurements most relevant for assessing performance and resource consumption are the task’s Response Time and Total Memory Consumption. The Response Time measurement starts on the application instance when the request enters the dispatcher queue and ends when the response is returned. It does not include navigation or rendering times on the front end, or network times for data transfers between the front end and the back end. It is strictly server Response Time; the end-to-end response time experienced by the application’s user may be significantly longer. The most important contributors to server Response Time are Processing Time (the time it takes for the task’s ABAP statements to be handled in the work process) and DB Request Time (the time that elapses while database requests triggered by the application are processed). In most cases, Total Memory Consumption is identical to the Extended Memory Consumption, but Roll Memory, Paging Memory, or Heap Memory may also contribute to the total.
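
As a hypothetical illustration of how these figures relate (all numbers invented), a single dialog step might show:

  Response Time                 800 ms
    Processing Time             580 ms
    DB Request Time             200 ms
    Roll Wait Time               20 ms
  Total Memory Consumption    9,500 KB (almost entirely Extended Memory)

Such a record would be unremarkable for an OLTP task; a comparable step showing several seconds of DB Request Time or hundreds of megabytes of memory would be a candidate for closer analysis.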

Since even the most basic statistics record contains too much data to include in a tabular display, STATS enables you to access all measurements — most notably the breakdowns of the total server Response Time and the DB Request Time, and the individual contributions to Total Memory Consumption — of a certain record by double-clicking on any of its columns. This leads to an itemized view of the record’s measurements in a pop-up window, as shown in Figure 3. At the top, it identifies the particular statistics record via its header data. The up and down triangles to the left of the header data support record-to-record navigation within this pop-up. The available technical data is grouped into categories, such as Time, DB, and Memory and Data. Use the tabs to navigate between categories containing data. Tabs for categories without data for the current statistics record are inactive and grayed out.

Figure 3

To assess the data captured in a statistics record, consider the purpose that the corresponding task serves. OLTP applications usually spend about one fourth of their server Response Time as DB Request Time and the remainder as Processing Time on the application server. For tasks that invoke synchronous RFCs or communication with SAP GUI controls on the front end, associated Roll Wait Time may also contribute significantly to server Response Time. For OLTP applications, the typical order of magnitude for Total Memory Consumption is 10,000 KB. Records that show significant upward deviations may indicate a performance problem in the application, and should be analyzed carefully using dedicated analysis tools such as transaction ST05 (Performance Trace). In comparison, OLAP applications usually create more load on the database (absolute as well as relative) and may consume more memory on the application server.

Saving Statistics

As mentioned earlier, productive systems create statistics records with a very high frequency, leading to a large volume of data that has to be stored in the application server’s file system. To limit the required storage space, the SAP kernel reorganizes statistics records that are older than the number of hours set by the profile parameter stat/max_files and aggregates them into a database table. After the reorganization, STATS can no longer display these records.

If you need to keep statistics — that is, a set of statistics records that match conditions specified on the STATS start screen — for a longer period of time for documentation reasons, reporting purposes, or before-after comparisons, you have two options:

  • Export the statistics to a binary file on your front-end PC
  • Save them into the statistics directory on the database

Both options are available via the corresponding Export Statistics to Local Front End and Save Statistics to Database buttons, respectively, on the STATS start screen (Figure 1) and in the tabular display of the statistics records (Figure 2).

To access and manage the statistics that were saved on the database, click on the Show Statistics Directory button on the STATS start screen (Figure 1), which takes you to the statistics directory shown in Figure 4. In the two areas at the top of the screen, you specify conditions that the statistics must fulfill to be included in the list displayed in the lower part of the screen. Statistics are deleted automatically from the database four weeks after they have been saved. You can adjust this default Deleted On date so that the data is still available when you need it. Similarly, you can change the description, which you specified when the statistics were saved. Double-clicking on a row in the directory displays the corresponding set of statistics records, as shown in Figure 2. All capabilities described previously are available.

Figure 4

Statistics that were exported to the front-end PC can be imported either into the STATS start screen, which presents the content in the tabular display shown in Figure 2, or into the statistics directory, which persists it into the database. In both cases, the import function is accessed by clicking on the Import Statistics from Local Front End button. You can also import statistics into an SAP system that is different from the system where the statistics were exported. This enables the analysis of statistics originating from a system that is no longer accessible, or cross-system comparisons of two statistics.

Conclusion

To optimize the performance of applications and ensure their linear scalability, you need reliable data that indicates the software’s response times and resource consumption. Within the ABAP stack of SAP solutions, this data is contained in the statistics records that the SAP kernel captures automatically for every task handled by a work process. Using this information, you can identify critical steps in an application, understand which component is the bottleneck, and determine the best approach for optimization.

The Performance Monitor (transaction STATS) is a new tool available with SAP NetWeaver 7.4 for selecting, displaying, and analyzing statistics records. It helps developers and operators find the best balance between fast response times, large data throughput, high concurrency, and low hardware cost. The tool is easy and convenient to use, and employs a feature-rich UI framework so that you can focus on the data and its interpretation, and set a course for high-performing applications.

Good Advice on Giving Good Speeches

Longtime readers of my blog know I find ideas for blogs everywhere, from psychology experiments to work events to the origin of words and phrases. Lately, however, books have become my primary source of inspiration. In fact, it’s not uncommon for me to have multiple blog ideas after reading a book.

That was the case when I read Seymour Schulich’s Get Smarter: Life and Business Lessons. Earlier this year, in How to Make Better Decisions I blogged about a simple, but practical, extension to the traditional two-column pro/con decision list. But there’s much more in the book that’s blog-worthy. For example, Schulich provides good advice on giving good speeches:

  • Be brief
  • Try to communicate one main idea
  • Create a surprise
  • Use humour
  • Slow it down
  • Use cue cards and look up often
  • Self-praise is no honour
  • Never speak before the main dinner course is served
  • Reuse good material
  • Use positive body language

Most of this advice is self-explanatory, except the self-praise line. Schulich means you should never introduce yourself; instead get someone to tell the audience why you’re important. That way they’re more likely to pay attention to you.

Considering the digital age we live in, Schulich is not really advocating we present using cue cards. Instead his goal is to ensure presenters don’t read speeches and get out from behind the podium. This forces us to give up our safety nets and increases the likelihood we connect with the audience.

While you can’t always control when you present, it’s important to recognize the most difficult slot is right before meals. No matter how good a presenter you are, remember the old adage:

Never get between people and their food.

Any presentation tips you want to add?