
openSAP

SAP BW/4HANA in a Nutshell


Week 1 Unit 1

00:00:12 Hello, and welcome to Unit 1: Value Proposition of SAP BW/4HANA. In this unit, we will
introduce SAP BW/4HANA, which is SAP's next-generation business warehouse,
00:00:24 we will discuss the similarities as well as the differences between BW/4HANA and BW
powered by HANA, and we will have a closer look at the design principles of BW/4HANA.
00:00:35 What is BW/4HANA? BW/4HANA is the next generation of our data warehouse application
which was introduced early in September 2016.
00:00:49 As you might know, we are coming from a long journey with BW. We started in 2012, when we
did some performance optimization, then we moved over to simplification and virtualization.

00:01:03 In SAP BW 7.5 powered by SAP HANA, we did a deeper integration with the HANA
platform, and we introduced our first Big Data scenarios here.
00:01:16 Now the question is: How do we move from SAP BW powered by SAP HANA to SAP
BW/4HANA? Therefore, we introduced the SAP BW/4HANA Starter Add-On,
00:01:29 which uses transfer tools to make the system ready for SAP BW/4HANA. It requires SAP BW
7.5, SP 4 or higher.
00:01:43 And some of you might actually remember this slide in a slightly different version, where the
BW/4HANA Starter Add-On was called SAP BW Edition for SAP HANA.
00:01:53 But basically, from a functional perspective it's the same, and as Gordon said, it's basically the
functionality which allows you and helps you to move from BW powered by HANA to
BW/4HANA.
00:02:04 So as you saw in this slide, BW/4HANA and BW powered by HANA of course have a lot of
things in common. But there are also differences, so let's have a closer look at this.
00:02:14 What are the similarities between BW/4HANA and BW powered by HANA? First of all,
BW/4HANA is a logical continuation of everything we did with BW and BW powered by HANA.

00:02:27 For example, if you look at the picture in the lower left-hand corner, you see that all the
fundamental principles of
00:02:34 all the aspects which BW covers, the way it integrates all these aspects in one integrated
application: all this continues to hold. So you will feel very familiar with the way you integrate
data, the way you transform data, the way you define queries.
00:02:49 That's basically all taken over from the BW world and the BW powered by HANA world. It's
very much the same and it's a continuation in that sense.
00:02:58 But as I said, there are also some fundamental differences. Exactly.
00:03:04 However, SAP BW/4HANA is a totally new product and it's not part of SAP NetWeaver,
therefore we are decoupled from any NetWeaver release cycles.
00:03:17 The next point is that we introduced a new innovation code line which runs only on the HANA
database, so we don't need any compromises here,
00:03:29 and the logical consequence here is that we will support HANA-optimized objects only. Maybe
one remark concerning the independence from NetWeaver.
00:03:41 Certain NetWeaver components will still be part of the BW/4HANA shipment of course, so
when it comes to application lifecycle management, for example,
00:03:48 transports of your BW models from development into test and production, this will be handled
in the same way as you know from the past based on the NetWeaver infrastructure for that.

00:04:01 And it's similar when it comes to application lifecycle management in the sense of patching your
BW/4HANA system, installing SAP Notes, installing support packages.
00:04:11 All this you will be very familiar with because that's the part which we take over from
NetWeaver. But certain NetWeaver components will not be part of the shipment any more,
only the ones on which we really rely.
00:04:25 And conversely, NetWeaver shipments will not contain BW/4HANA any more. It's really a
separate product, independent of NetWeaver.
00:04:34 So let's come to the four design principles of BW/4HANA. The first one is "simplicity".
00:04:41 What is inside simplicity? Inside simplicity we have the modeling objects.
00:04:49 We reduce the number of modeling objects dramatically. So in BW/4HANA we will only have
four objects.
00:04:59 So we are coming from ten objects in the past, and now we have four SAP BW/4HANA-optimized
objects: two objects for persistence, which are the advanced DataStore object and the
InfoObject.
00:05:13 We will not have the PSA table any more and so on and so forth, and we will have two objects
for virtualization, which are the Open ODS view and the CompositeProvider.
00:05:22 And the most important thing here is you can leverage all your requirements with exactly these
four objects. Yes, let's look at architecture aspects.
00:05:34 The main points about the paradigm changes in architecture are we need less persistent
layers, we can build leaner and more flexible data warehouse architectures and we
dramatically reduce the development efforts.
00:05:46 So if you look at the small picture on the right-hand side, you'll see what that actually means.
Less persistent layers means that you don't have any mandatory layers, for example,
00:05:56 just for performance reasons, which you always had to fill, otherwise a query wouldn't perform
well. Basically, queries can be run on any layer.
00:06:04 As soon as data is in a BW container or, in some cases, even if BW only reaches out to this
data via federation, it's directly possible without any additional optimizations to put a query on
top.
00:06:17 So this is a very important thing. And that basically means, as I said, that every layer can be
used for query.
00:06:25 Now with the virtualization capabilities which we have, for example, in the CompositeProvider,
we can also combine data from various layers independently,
00:06:34 so we don't have to bring data into one common layer in order to combine it, but we can
basically leverage data which comes almost straight from the source with data which is more
harmonized
00:06:46 if it makes sense from a conceptual or logical perspective. And of course all this, as I said,
goes along with reduced development efforts
00:06:56 because the fewer layers you have, the easier it is to build new scenarios and the easier it is to
adapt existing scenarios.
00:07:03 Okay, let's move to operations. As you might know, we had quite strong data lifecycle
management in the past as well.
00:07:12 Here we introduced hot, warm, and cold data concepts. We will follow those principles with
SAP BW/4HANA as well.
00:07:23 But the integration will be tighter and we will introduce smarter concepts here.
Furthermore, we introduced some cloud deployments here as well.
00:07:34 So here we have three different options: HANA Enterprise Cloud, Amazon Web Services, and
Microsoft Azure. So here it's quite easy to deploy your SAP BW/4HANA,
00:07:48 quite easy and quite fast to make a system ready. So let's look at the next design principle,
which is "openness".
00:07:56 Openness comes in two aspects: It's always about opening BW for other consumers, but also
about bringing data from other sources into BW. Yes, I'll just say a few words on openness

00:08:11 from the perspective that BW can expose views to the HANA native world, which is what we
mean by automatically generated HANA views.
00:08:23 It's possible for most of the BW/4HANA-managed objects, covering more object types than we
supported in the past.
00:08:32 So it's quite easy to automatically generate a "calculation view" out of the BW object, and then
you can build your own logic on top of that calculation view, you can stack the calculation view,

00:08:45 and it's quite important here to note that you can then access the data via any SQL-based
front-end tool. Via SQL you can access each of these calculation views.
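Since these generated calculation views are reachable via plain SQL, a small sketch may help. Everything named below is hypothetical: the schema and view names are invented for illustration, as the real names depend on your BW/4HANA system settings and object names.

```python
# Sketch: assembling a plain SQL query against a BW-generated
# calculation view. The schema and view names are hypothetical;
# real names are derived from your BW/4HANA system and objects.

def build_query(schema: str, view: str, fields: list[str], limit: int = 100) -> str:
    """Assemble a simple SELECT against a generated calculation view."""
    field_list = ", ".join(f'"{f}"' for f in fields)
    return f'SELECT {field_list} FROM "{schema}"."{view}" LIMIT {limit}'

# Hypothetical example: a view generated for a sales object.
sql = build_query("ZBW_EXPOSURE", "ZSALES_CV", ["SALES_ORDER", "AMOUNT"])
print(sql)
# Any SQL-capable client (for example the hdbcli Python driver for
# SAP HANA) could then execute this statement against the database.
```

The point is only that no BW-specific client is needed: whatever tool speaks SQL can consume the view.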
00:08:57 And it's also important to note here that you have the possibility to automatically generate
HANA analytical privileges out of BW/4HANA authorization properties.
00:09:12 So we have an automatic pushdown of the BW authorizations, or rather, we translate the BW
authorizations into HANA analytic privileges.
00:09:25 So this is quite important. Right, it's very important and we get a lot of questions concerning
this,
00:09:30 so that's why we stress this again and again. Basically, whenever we get a chance, we tell you
that this concept also includes all the authorizations and security.
00:09:39 So let's look at the other side about bringing data into BW. Here we basically have two pillars,
we have the ODP source system for all ABAP-based SAP applications.
00:09:53 So that basically means two things: first of all, you've known the ODP source systems for a
while, I think they started to show up with BW 7.3,
00:10:02 and we've continued and expanded this functionality in the past, over the last few releases.
Now it's going to be really the only way to bring data into BW/4HANA.
00:10:11 So the old SAPI world will not exist any longer. If you want to attach a source system to a
BW/4HANA instance,
00:10:23 say an ECC system or an S/4HANA system, this will always go via the ODP source system.
For non-SAP data, meaning data from any relational database, but also data from more exotic sources
like Twitter, Facebook, whatever,
00:10:37 maybe even semi-structured stuff: for all this, we use HANA platform capabilities,
00:10:44 namely HANA Enterprise Information Management, or to be more precise, HANA Smart Data
Integration. This is a concept which I think we also discussed a little bit in the BW 7.5 class
here in openSAP,
00:11:00 which allows us to leverage adapters from the HANA platform and kind of orchestrate the
whole thing from the BW environment. So you can schedule the load, start loads, maybe even
switch between a scheduled load or federated access and a real-time replication.
00:11:18 All these different modes are possible depending on the source. And everything is handled
from the BW side, but the functionality is really executed and part of the HANA platform.
00:11:28 So it's again one of these platform integration aspects here as well. Now let's move over to the
user interfaces,
00:11:36 and with a basically brand-new back end which tries to cut off a lot of things which we did in
the past but which we don't consider strategic or necessary any more.
00:11:48 We basically made the same switch on the front-end side as well. And that's a very important
message: The BEx front ends will not be supported in BW/4HANA any more.
00:11:58 So we really move away from BEx Analyzer, for example, or from BEx Web into the direction
of the BusinessObjects front-end suite. So that basically means that business users who run
their queries on BW/4HANA will basically do this via BusinessObjects Cloud,
00:12:19 which is our new cloud-based offering for dashboarding and all kinds of reporting, and also a
spreadsheet type of reporting, or if it's an on-premise version, then it will be the Design Studio
suite or BusinessObjects Analysis.
00:12:34 So that's really a move away from the BEx past to the BusinessObjects world here. One thing
which I would like to mention because it generates a lot of confusion:
00:12:45 It does not mean that the BEx Query will go away. So the query functionality which we in the
past typically used to call BEx Query,
00:12:53 we've now tried to change the wording a little bit and talk about the BW Query. But from a
functional perspective, this still continues to hold,
00:13:01 so all the investment which you did on the query layer, this will survive the move from BW
powered by HANA to BW/4HANA. And we did exactly the same for the user interfaces for
modeling and administration.
00:13:17 So in simple terms, for modeling we have the Eclipse-based design interface, and for
administration we will have UI5 applications here.
00:13:29 As you can see on the right side, we introduced the "data flow modeler" for the Eclipse-based
modeling side. Here it's quite easy, fast, and agile to start modeling, to administrate the data a
bit,
00:13:47 to start loadings, to bring data together, and so on and so forth. And on the other side, we
introduced some new UI5 tools which can be used for, let's say,
00:14:01 administration, process chain administration, and so on and so forth, for administrators and
consultants. Especially for the data flow modeler, which is the main new part of the BW
modeling tools in Eclipse
00:14:14 which comes with the first shipment of BW/4HANA, we will show a quite detailed demo in one
of the following units. Exactly.
00:14:22 So let's come to the performance topic. On the performance topic, we will go our way and we
will push down most of the computations from the OLAP side into the HANA database,
00:14:36 as you can see on the pictures. So more and more OLAP functionalities are pushed down to
the HANA engine.
00:14:45 The same for planning and for data management here. Yes, but it's not only about pushing
functionality which used to reside in the BW application down to the HANA platform.
00:14:55 It's also about leveraging high-performance HANA engines inside BW. So, for example,
integrating data which you manipulated or logically enriched in a HANA model into BW,
00:15:10 that's of course possible, and we've already seen that in BW powered by HANA. The same
with data integration and data transformation.
00:15:16 We also saw this as one of the pillars of data integration in "openness" with HANA smart data
integration basically. And another important point is that we also want to leverage all kinds of
libraries, functionalities, and algorithms which the HANA platform brings,
00:15:34 typically via the HANA analysis process in BW which can integrate all this kind of stuff into a
BW-managed process, and basically execute such functionality with data inside BW and
writing it into another BW container.
00:15:51 So let's summarize what we've learned in this unit. BW/4HANA is really a new product.
00:15:57 It's not a legal successor of BW, it's not a legal successor of BW on HANA or powered by
HANA; it's really a new product. But, and I think we've stressed this a couple of times,
you've seen many similarities and
00:16:10 many things you've known from BW powered by HANA already, it really is a logical successor,
it's really the next logical step of BW powered by HANA.
00:16:18 But now we basically do a certain cut, we remove certain functionality which we don't need any
more, and really want to go the last mile in the sense of simplification, for example,
00:16:30 to really get rid of all the old heritage which we don't need any more. And SAP BW/4HANA
runs on HANA only, which means no compromises against other databases.
00:16:42 So it's fully optimized for our platform, for the HANA platform, and it runs on HANA only. We
also discussed a little bit that it enables you to build flexible data warehouse architectures.
00:16:56 Remember: fewer layers, less persistence, more virtualization, more combination across layers.
All these ideas which we started with BW powered by HANA we will continue and further
elaborate on in BW/4HANA.
00:17:11 And to become more agile and much faster than in the past, we simplified our way of modeling
a data warehouse, and therefore we introduced the "data flow modeler"
00:17:23 which gives you the opportunity to build your data flows much more easily and much more
agilely than in the past, and the same for administration, so we introduced a couple of
administration tools here as well.
00:17:35 Yes, I guess that's it for today. Don't forget to do your self-test.
Week 1 Unit 2

00:00:12 Hello and welcome to Unit 2 "Functional Overview". In this unit, we will introduce the two
main functionalities which are new in the first release of BW/4HANA.
00:00:24 And that's the new data flow modeling tool which is part of the Eclipse environment in the area
of modern UI, and in the area of simplicity we have a new concept for multi-temperature data
management,
00:00:37 basically about how to deal with hot and warm data in a more flexible and more enterprise-
ready way than we had in the past. So what is the data flow modeling tool all about?
00:00:52 With the first release of BW/4HANA, we will introduce the graphical data flow modeler, which
looks like a whiteboard. You can see it in the picture on the slide.
00:01:04 On the right-hand side you can use all the BW object types, and then you can easily use via
"drag&drop" the BW objects like CompositeProvider, Open ODS views,
00:01:14 DataStore objects, InfoObjects, and use it for modeling. So it's our central entry point for all
modeling tasks.
00:01:25 It's intuitive, you can use it, as I said, by dragging and dropping the objects, bringing it to the
data flow modeling tool,
00:01:33 and then enhancing the data flow. Model new or explore existing data flows, jump into the
maintenance tasks.
00:01:46 You can easily create transformations between objects. You can easily create data transfer
processes.
00:01:53 You can easily execute the created DTPs, and so on and so forth. It's also possible for you to
explore existing data flows here as well.
00:02:06 So as you can see, everything is possible in a very agile way with this data flow modeling tool.
And we'll show this in a demo later on, of course.
00:02:15 So what about the dynamic data lifecycle management? Well, we had some problems with this
in the past, and therefore we thought about a new concept,
00:02:25 which is basically based on a scale-out architecture. So in a scale-out landscape, you can now
dedicate certain server blades to hot data and others to warm data.
00:02:37 And the main point about this is that those blades which are used for warm data can be loaded
not only up to 50% of their RAM capacity, but up to 100% or even 200%.
00:02:51 So that basically means if you have a scale-out landscape with two terabyte nodes, the hot
nodes will of course only contain up to one terabyte of data because the rest will be used for
processing,
00:03:03 and has to be reserved for that. For the extension nodes which keep the warm data, you can
load them with either up to two terabytes if you go for the 100% option,
00:03:14 or with up to four terabytes if you go for the 200% capacity. And as the extension node
concept is part of the complete HANA landscape,
00:03:28 we can now support all the HANA features for operations, updates, and data
management, like backup and recovery, point-in-time backups, high availability, and disaster
recovery.
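The capacity figures just quoted can be put into a small calculation. The 50/100/200 percent numbers are the ones from this unit; the helper function itself is only an illustration, not an SAP sizing tool.

```python
# Usable data capacity per node in a scale-out landscape, following
# the rule quoted above: hot nodes keep about half their RAM free for
# query processing, while extension (warm) nodes may be filled to
# 100% or even 200% of their RAM.

def usable_capacity_tb(ram_tb: float, node_type: str, warm_fill: float = 1.0) -> float:
    """Return the data volume in TB a node of the given type can hold.

    warm_fill: 1.0 for the 100% option, 2.0 for the 200% option.
    """
    if node_type == "hot":
        return ram_tb * 0.5        # half the RAM is reserved for processing
    if node_type == "warm":
        return ram_tb * warm_fill  # extension nodes can be filled fully or doubly
    raise ValueError(f"unknown node type: {node_type}")

# With 2 TB nodes, as in the example above:
print(usable_capacity_tb(2, "hot"))                  # hot node: 1.0 TB of data
print(usable_capacity_tb(2, "warm"))                 # warm node, 100% option: 2.0 TB
print(usable_capacity_tb(2, "warm", warm_fill=2.0))  # warm node, 200% option: 4.0 TB
```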
00:03:42 Now with that concept of the extension nodes, this warm data management becomes data
center ready. Right, and it also allows us to add more flexibility to the whole concept of
dynamic tiering
00:03:58 because unlike in the past where we could only determine this, first of all, for certain object
types, and then only for complete objects, so an object had to be either completely in hot store
or in warm store,
00:04:09 for advanced DataStore objects in BW/4HANA, we can now actually decide this on partition
level. So if you partition an advanced DataStore object, you can really decide on partition level
whether certain data,
00:04:21 maybe older data, should only be warm. Or if it's more recent data, it should be in hot memory.
00:04:26 So even a single object can be distributed over the landscape, in a very flexible way. So I
guess we're at the point of the system demo.
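Before the demo, the partition-level decision can be illustrated conceptually. In BW/4HANA this is set in the advanced DataStore object's storage settings, not through code; the sketch below (with an invented one-year age threshold) only shows the kind of rule one typically applies: recent partitions stay hot, older ones go warm.

```python
from datetime import date

# Conceptual illustration only: BW/4HANA assigns the temperature in the
# advanced DataStore object's storage settings, not via a Python API.
# The rule below just mimics "older partitions -> warm, recent -> hot".

def partition_temperature(partition_upper_bound: date, today: date,
                          hot_days: int = 365) -> str:
    """Classify a time-sliced partition as 'hot' or 'warm' by age."""
    age_days = (today - partition_upper_bound).days
    return "hot" if age_days <= hot_days else "warm"

today = date(2017, 1, 1)
partitions = {
    "2014": date(2014, 12, 31),
    "2015": date(2015, 12, 31),
    "2016": date(2016, 12, 31),
}
for name, upper_bound in partitions.items():
    print(name, partition_temperature(upper_bound, today))
# The 2014 and 2015 partitions would go warm, the 2016 one stays hot.
```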
00:04:35 We'll start with a demo of the data flow modeler. We'll show you how to graphically build such
a new data flow.
00:04:42 And then we'll quickly show you in the UI how you can actually change the location of a
partition of an advanced DataStore object from hot to warm or vice versa.
00:04:55 So let's switch into the Eclipse modeling tools. So for those of you who are familiar with BW
powered by HANA,
00:05:05 this is of course something you've at least roughly seen before. It's again the Eclipse
environment and the BW modeling tools in there.
00:05:11 But as we said, this is a new version which leverages functionality which only BW/4HANA in
the back end can provide. So let's see how we can actually build such a new data flow.
00:05:22 So what we do is say we want to create a new data flow. And the data flow gets a name.
00:05:28 So let's say that's "ZOPENSAP", and it's our "openSAP data flow".
00:05:39 And then we are probably pretty much ready to start. So as Gordon said, it pretty much follows
a drag and drop paradigm.
00:05:47 So if we for example want to start with a data source, we can either create a new data source
by dragging the "DataSource" icon from here.
00:05:56 Or if we have an existing data source, we can also select it here from the left-hand side and
bring it into the data flow. So if you now scroll down, you see that actually the source system is
already connected.
00:06:11 So the data flow modeler also knows the dependencies between the objects and can display
those. And now we can start building on top of this.
00:06:20 So, for example, if you want to build a DataStore object which stores the data of this data
source, you would drag and drop the DataStore object icon here,
00:06:33 you would assign a name to the DSO. Let's say we call it "SOHDRCUR", that's the current
data maybe.
00:06:47 And since we wanted to store the data of this data source, we just draw a connecting line
between the data source and the advanced DSO.
00:06:56 And what happens if we jump into the maintenance of the DSO now is that it will take the data
source as a template. So the structure of the ADSO will now be derived from the structure of
the data source directly by just connecting things with this line.
00:07:11 And again we could call this, say, "current Sales Orders". We will later on make this model a
little bit more complex and enrich it with historic sales orders.
00:07:22 That's why I'm always talking about current sales orders here. So that's "current Sales Orders".

00:07:27 And we can basically say "Finish" here. And now we jump from the data flow model into the
modeling UI, or into the editor of this individual object,
00:07:36 so of this advanced DataStore object. And here we can maintain all the details which are
required.
00:07:43 So, for example, we will definitely need to specify a key field if we want to use this as a
standard advanced DataStore object. And I think then we can start with the activation.
00:07:57 Now what you see here is again one of the things which we very frequently have with
non-SAP data, with external data. We have data types which are a bit exotic.
00:08:09 So in order to make this object easier to consume from a BW side, it's always a good idea to
put an "Open ODS view" on top of it which does some data type conversions,
00:08:21 and makes your life a little bit easier. So we would create an Open ODS view just in the same
way again by drag and drop.
00:08:27 "sohdercur_V" "V" for "view". And again we can use the connecting line feature here to use
the DataStore object as a template.
00:08:44 We would again say that's the "current Sales Orders". Oops...
00:08:52 And we can pretty much finish this wizard here as well. Now let's see if we have to adjust
something.
00:08:59 Okay. The "Amount" field is a key figure here, of course.
00:09:03 And I guess then we can activate this. And very quickly and easily we've built, well, not a very
complex data flow, but still one with a few layers.
00:09:12 Now let's switch over and show you how an existing object can be integrated in this data flow,
and how you can bring these two things together.
00:09:19 Exactly, so now I will expand the data model. So I will use an existing object and I will insert
the existing object into the existing data flow.
00:09:31 This is quite easy as well, so by dragging and dropping.
00:09:37 - Maybe let's put it on the same level here as the other one. - Yeah, exactly. But also you have
a lot of flexibility to design the layout as you prefer.
00:09:45 So here you can see now we have an advanced DataStore object for historical data. I will
expand it.
00:09:53 Right, so let's see where the data comes from... if we can easily see... And you see it's a
different data source, but it comes from the same source system.
00:10:00 So all these relations and all these dependencies between the objects are now displayed in a
very nice way. Exactly.
00:10:07 Now we can build on top of this. And on top of this, we will of course build a CompositeProvider
to make the data ready for reporting.
00:10:13 Therefore I choose the "CompositeProvider" and bring it in via drag and drop... Give it a
name...
00:10:27 So then I can easily connect... the first object and the second object as well. And then jump
in... enter a description...
00:10:45 Let's just say "SalesOrders" to make this quick. And here you can see the two objects
underneath.
00:10:55 Like the current sales orders on an Open ODS view level and the "Sales Order Snapshot" on
the advanced DSO
00:11:03 are automatically added as a union view, you can see that. I will finish that.
00:11:09 And it's very nice to see that these connections are both used to kind of reverse engineer what
had been built in the system. But they can also be used to fill wizards, like in the context of
these CompositeProviders, with additional information.
00:11:25 So by drawing these two lines from the Open ODS view and from the advanced DataStore
object into the CompositeProvider, the wizard was already filled with these two objects as
union partners in the CompositeProvider.
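Conceptually, what the wizard has just set up is a union: the CompositeProvider stacks the rows of its union partners on a common field list. A tiny Python illustration follows (field names and rows are invented; the real union is executed inside HANA, not in application code):

```python
# Conceptual sketch of a CompositeProvider union: rows from the two
# union partners (current and historical sales orders) are stacked on
# a common field list. Names and rows are invented for illustration.

COMMON_FIELDS = ("SALES_ORDER", "AMOUNT")

def union_provider(*partners: list) -> list:
    """Stack rows from all partners, keeping only the common fields."""
    result = []
    for rows in partners:
        for row in rows:
            result.append({field: row.get(field) for field in COMMON_FIELDS})
    return result

current_orders = [{"SALES_ORDER": "1001", "AMOUNT": 250.0}]
historical_orders = [{"SALES_ORDER": "0042", "AMOUNT": 99.0}]

print(union_provider(current_orders, historical_orders))
# Both current and historical rows end up in one reportable result set.
```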
00:11:43 Exactly. Now I will do the mapping between the objects.
00:11:48 So this is quite easy via drag and drop. Again, that one here, and the other one from the Open
ODS view...
00:11:59 You can see the mapping is automatically done. I guess we can activate this and we're pretty
much finished, right?
00:12:07 So now we activate the object. Now the object is ready for reporting.
00:12:11 Right, so again, jumping back, you see that it's very easy and very nice to build and analyze
these data flows in a graphic way. And of course from each of these objects you can now jump
into the individual maintenance, make adjustments,
00:12:33 change the objects. We only mentioned on the slides that you can also define transformations
and DTPs from there. I have to admit that, if you do this, it is still not part of the Eclipse modeling
tools,
00:12:47 but this would actually, at least as of now, jump back into the SAP GUI. But of course we still
have the integration of going back and forth.
00:12:57 And on the other hand, we will later see in the roadmap of BW/4HANA that this is one of the
next objects which is going to come. So you see that we pretty much now have a new central
entry point for all the modeling things which are relevant for BW.
00:13:12 You start a new data flow by basically starting with a blank sheet and starting to drag and drop
existing objects or building new objects.
00:13:21 You can also reverse engineer whole data flows, for example if you are starting with a new
system, and you import data flows from your old BW powered by HANA system.
00:13:30 Then you would just drag in one of the objects as we did here with the one on the left-hand
side. And then you can analyze to the bottom what is happening there.
00:13:39 So basically it can be useful for various things. All right, so that's it for the data flow modeler.
00:13:48 Now let's have a look at the multi-temperature data management. And therefore I will open an
advanced DataStore object.
00:13:58 So here we have one. You can see it's an advanced DataStore object, and now we have
different options here on the storage options.
00:14:09 It's possible to use dynamic tiering here. The dynamic tiering here is based on the extension
nodes.
00:14:15 I can flag that on and then I have two options here. I can say the dynamic tiering is valid for the
complete advanced DataStore object.
00:14:25 When you do that, then the complete DataStore object will be stored on the extension node.
On the other hand, it is possible to use it partition-wise.
00:14:39 I can say "Okay, I have different partitions, and a couple, or all, or some of the partitions
should be stored on the extension nodes." Therefore you have to flag that option per
partition,
00:14:52 and then jump to the settings. And in the settings here, now it's possible to decide which
partition should be stored on which location.
00:15:04 So therefore open that one and then... - You have to click harder. - Yeah.
00:15:12 Then go to "Warm". And then you can decide that partition 1 should be moved to "Warm", to
the extension node.
00:15:19 And partitions 2 and 3 should be stored in the hot store. Now maybe let's explain a little bit
about what's going on here: what happens to the data of the object?
00:15:32 So if the object is empty, then this basically means that the corresponding tables will be
created, or the corresponding partitions will be created on the respective blades.
00:15:42 So one partition will be created on the extension nodes, hot ones will be created in the regular
nodes. If the object contains data already, then the data is not moved automatically.
00:15:54 But basically the new location is stored on the HANA side. And whenever, for example, a tool
like Data Distribution Optimizer runs,
00:16:05 it will make the necessary adjustments to the data and move the data from "Hot" to "Warm" or
vice versa. So here it's just a matter of modeling or determining where the data should reside.
00:16:16 And as soon as you have data in your object already, you additionally have to run the Data
Distribution Optimizer or do a landscape reorganization.
00:16:25 Yeah, I guess that's it for the demo. Let's switch back to the slides.
00:16:29 So what are the key takeaways from this unit? First, SAP BW/4HANA comes with a completely
new data flow environment.
00:16:39 We call it the data flow modeler, which gives you the possibility to easily and agilely create
completely new data flows, enhance data flows, maintain data flows.
00:16:50 All that stuff is easily possible here. And the second one is SAP BW/4HANA simplifies
dramatically the data management of large data volumes
00:17:00 and comes with a flexible dynamic tiering concept based on extension nodes. This is the first
time in the history of BW that dynamic tiering is this flexible.
00:17:13 And of course I think there is much more on the roadmap. Right, and we have a dedicated unit
for that of course as well.
00:17:23 Now before you look at the roadmap slides and the roadmap unit, don't forget to do your
self-test.
Week 1 Unit 3

00:00:12 Hello, and welcome to Unit 3: Migration to SAP BW/4HANA. In this unit, we will talk about the
different paths for how you can basically get to BW/4HANA,
00:00:21 we will show, discuss, and also demo the tool support which is currently provided by the
BW/4HANA Starter Add-On, and we will give you an outlook on the future conversion support,
so additional tools which will
00:00:33 help you to get on your journey to BW/4HANA even more easily. So now the question is: How
do we get to SAP BW/4HANA?
00:00:43 Of course, we have different paths here. From a long-term point of view, we definitely
recommend that every customer goes to SAP BW/4HANA.
00:00:53 But nowadays, when you start totally new (what we call a "greenfield" approach), then you
have two options at the moment. You can start directly with SAP BW/4HANA, but please be
aware that we currently have some small limitations here.
00:01:11 The first limitation is that planning (IP and BPC) is currently not supported, but we will support a
planning application by the end of 2017.
00:01:23 The other thing is that besides the planning add-on, no other add-ons are supported at the
moment, but we will close this gap; you will see that in our roadmap presentation as well.
00:01:35 Another option you have, if you are not satisfied with the limitations, is to go to SAP BW 7.5
powered by SAP HANA plus the SAP BW/4HANA Starter Add-On, to protect your system, I
would say.
00:01:52 Right, basically this allows you only to use the new objects and thereby you're basically as
close to BW/4HANA as you can be in a BW powered by HANA context.
00:02:02 Now what about customers who are not even running their BW system on HANA yet or are on
BW powered by HANA as of now? Well, of course, we recommend that everybody moves as
soon as possible to BW 7.5 powered by SAP HANA.
00:02:18 So if you're not on HANA at all, we definitely recommend that you take this path and prepare
the system and also prepare yourself, get some expertise on HANA, and also have the
possibility to use the capabilities which you have in BW 7.5 powered by HANA
00:02:34 to build leaner and more flexible architectures. A lot of the stuff which we discussed is possible
there already.
00:02:41 And there are basically two reasons why we recommend this: First of all, it brings you on the
journey which we recommend in the long term anyway, it brings you on the journey to
BW/4HANA,
00:02:51 and secondly, you can also directly benefit from this, right? I mean, if you have... The
advantages of leaner architectures are something which basically provide direct and
immediate value
00:03:03 to your business users and also to the administration of your system. The smaller and leaner
your system gets, the easier it is to handle.
00:03:09 So it's definitely the preferred and recommended approach to bring a BW system to BW 7.5
powered by HANA in any case. Now let's talk a little bit about the prerequisites,
00:03:23 what the step from BW 7.5 powered by HANA to BW/4HANA really means, and what the
considerations are which we have to take into account.
00:03:35 Yes, the prerequisite here is that you should only use SAP HANA-optimized objects. You can
only use them in the BW/4HANA context of course.
00:03:45 Exactly, but therefore we have this SAP BW/4HANA Starter Add-On, and this Starter Add-On
provides you with tools for exactly that migration, for data flow migrations and data object
migrations from,
let's say, classic BW objects to HANA-optimized objects. From a source system point of view,
only the usage of ODP, HANA, and FILE source systems is currently supported.
00:04:11 3.x data flows are not supported any more. The BEx front-end tools are not supported any
more, but of course BEx and BW queries stay as they are, so we then call them BW queries.
00:04:23 That's a very important point because we have a lot of confusion here and we get a lot of
questions concerning this. So when it comes to the front-end tools, the BEx front ends are no
longer supported,
00:04:34 but of course all the investments which you've made into BEx queries are safe. You can still
continue to leverage these queries, and in the BW/4HANA context you will basically of course
also be able to define new queries.
00:04:44 So the whole concept of BW or BEx Query will stay. And last but not least, we don't currently
have an NLS interface for partner solutions.
00:04:57 And also I don't think we currently have any plans concerning this. So those are basically the
hard facts,
00:05:02 that's what you really have to consider and the status which you have to achieve before you
can move your system to BW/4HANA. Gordon already mentioned some of the temporary
limitations which are basically in the area of add-ons.
00:05:17 Currently, we don't support any of the add-ons which you know from BW. This is potentially
subject to change; we will look at each of the add-ons
and consider whether it's still relevant for our customers. We'll also take
customer feedback into account, of course.
00:05:35 Basically, the most prominent one is planning, which we will definitely support in late 2017.
Yes, that's it basically concerning the different paths to BW/4HANA
00:05:53 and the things which we have to be aware of and which we have to consider. Now let's have a
look at the tool support which is provided by the Starter Add-On.
00:06:03 As Ulrich said, with the Starter Add-On, we provide a "data flow transfer" tool. And here it is
possible to transfer complete data flows.
00:06:15 It looks a bit like the old migration tool with the transaction RSMIGRATE; it's more or less the
same. You can define objects, then start creating a copy of the objects.
00:06:26 We fully automatically generate HANA-optimized objects out of the, let's say, classic BW
objects here. Imagine you have a data flow starting with a multiprovider and some underlying
cubes and some DSOs and the data source.
00:06:43 Then we will easily transfer exactly those objects to SAP BW/4HANA-supported objects which
are the CompositeProvider, the Advanced DSOs, we will of course copy the queries here as
well.
00:06:57 And this is done fully automatically. One thing which you have to take into account is that it's
really a copy of the data flow, and the data is not copied.
00:07:07 So it's really just providing the new metadata, and the data has to be handled separately.
That's something which we'll talk about in the chapter about new developments and the
outlook,
00:07:19 how we will handle this and how we will support this in the future in a smoother way. So I
guess it's time to show a demo of the data flow migration tool.
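The object mapping the tool applies can be sketched as a small lookup. This is a hypothetical Python illustration of the behavior just described; the namespace prefix and function names are invented for illustration and are not the actual tool's code:

```python
# Hypothetical sketch of the data flow transfer tool's object-type mapping.
# Classic BW object type -> HANA-optimized target type (per the transcript):
OBJECT_MAPPING = {
    "MultiProvider": "CompositeProvider",
    "InfoCube": "Advanced DataStore Object",
    "DataStore Object (classic)": "Advanced DataStore Object",
    "Transformation": "Transformation",  # kept as-is
    "DTP": "DTP",                        # kept as-is
    "Query": "Query",                    # copied to the new provider
}

def plan_transfer(data_flow):
    """Return (source_name, target_type, proposed_name) for each object.

    Only metadata is generated; the data itself is NOT copied and has
    to be handled separately.
    """
    plan = []
    for name, obj_type in data_flow:
        target_type = OBJECT_MAPPING[obj_type]
        # The tool proposes target names in a special namespace;
        # the "/TRF/" prefix here is purely illustrative.
        proposed_name = f"/TRF/{name}" if target_type != obj_type else name
        plan.append((name, target_type, proposed_name))
    return plan

flow = [("OEPM_NP01", "MultiProvider"), ("CUBE01", "InfoCube")]
for src, tgt_type, tgt_name in plan_transfer(flow):
    print(src, "->", tgt_type, tgt_name)
```

As in the tool itself, the proposed names and descriptions could then still be changed before executing the transfer.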
00:07:27 Let's switch to the demo system. Here you go. So here we are, we have several transactions
here.
00:07:35 You see on the left-hand side actually the objects which we want to migrate, or to transfer in
this demo. The transaction code is "rsb4htrf".
00:07:53 So here we have the data flow transfer tool. Now we will choose an object here.
00:07:59 You can see our data flow starts with a MultiProvider here, and then we have different
DataStore Objects in the path, and we have InfoCubes there as well, and so on and so forth.
00:08:08 So now we will transfer the whole data flow, starting with the MultiProvider and the queries
maybe. So let's start with the MultiProvider here.
00:08:19 This is the MultiProvider "Oepm_np01", and then you can choose in the settings that we will
start with "Definition of Target Objects".
00:08:31 It's possible to copy the queries here as well, then you select "Copy Queries", so let's do that.
So here you can see this is the original object, which is an InfoCube, a MultiProvider, or a
DataStore Object,
00:08:47 and we will transfer those objects to Advanced DataStore Objects, Composite Providers,
Transformations are still the same, DTPs are still the same, and the queries we copied here as
well.
00:09:01 Here you can see the name of the target objects, we have a special namespace here, and so
on and so forth. But you can basically change this, right?
00:09:09 Some of the things can be changed: The names can be changed, the descriptions can be
changed, sometimes if the description, for example, contains the word "InfoCube", you'd
probably want to change this.
00:09:17 So you have ways to influence what's going on here. So I will save that one, go back,
00:09:23 and then I can easily execute a transfer. I'll do that right now.
00:09:30 - Here you can see... - It looks like the system is working... Now for all the objects we will
transfer, the system will create a copy or will create new objects,
00:09:44 it will create a Composite Provider, it will create advanced DataStore objects for persistency, it
will create the DTPs, the transformations, and all that stuff.
00:09:56 So since we didn't specify a Target InfoArea in this case, where are the objects going to be
found in the end? Ah, there's a little bit of red...
00:10:16 Let's continue, let's see what worked and what didn't. Maybe we can refresh on the right-hand
side; I think we should find the objects here.
00:10:23 So we had some error now, but let's see if we find at least the objects for which things worked.
Oh, we find a CompositeProvider with a number of queries on top, so that's pretty good.
00:10:35 We find a number of DataStore Objects. And if we compare this carefully, we should see that
for all the old DataStore Objects
00:10:45 we find a corresponding new Advanced DataStore Object, and the same should be true for the
InfoCubes. So apart from the small error, which just shows you that this is a live demo, I guess,
it looks like the conversion worked.
00:11:01 So let's get back to the slides.
00:11:05 Now we showed you what's currently working and it's clear that this is only an initial step
because especially the migration of data will be a very important and also painful task,
00:11:16 and this is of course something which you don't want to do manually with additional test efforts
but where you actually expect a lot of tool support.
00:11:23 So for the future we basically have two different additional conversion tools which will allow
different approaches to conversion, and both of them take into account the migration of data
as well.
00:11:38 The first method will be the "in-place conversion". The in-place conversion basically happens
in the following way:
00:11:46 You start in your development system and you select a certain scope just as you did in the
previous examples. So you select certain data flows which you want to migrate.
00:11:58 And then basically instead of creating copies of the objects and transferring the old objects to
new objects, this will be handled in kind of an in-place way, as the name suggests.
00:12:11 So what's going to happen is that if you have a MultiProvider with a given name, you will end
up with a Composite Provider with the same name.
00:12:18 So it's not a copy in that sense, but it's really a new object with exactly the same technical
name. And the same is true for InfoCubes, which will be converted into advanced DataStore
objects with the same name.
00:12:31 And we will also take into account the movement of data in this case, right? So data will be...
During the conversion of these objects when an InfoCube is kind of converted into an
advanced DataStore object,
00:12:42 we will also fill the data tables of the Advanced DataStore accordingly, which is of course quite
a tricky task because, if you remember what we showed you in the past,
00:12:53 for example, the tables of the Advanced DataStore Objects typically don't contain the SID
values any more but the characteristic values,
00:13:01 so all these conversions have to be taken into account, but that's basically done automatically
by the system. Transformations will be adjusted and created newly because, from a technical
perspective, the object does change.
00:13:14 Even though it keeps the technical name, it's a slightly different object, so the transformation
has to be changed and touched. And once you've done this with a certain scope, you have
basically two approaches:
00:13:26 You can either do this for all of your development system, which is of course a very long
journey and requires a lot of iterations.
00:13:37 So that's one way. The other way would be just to do this for a limited scope and then do a
transport of the new objects into the new target system.
00:13:45 And there, of course, it's also a little bit tricky because the system will have to be able to
handle the conversion basically in one of the transport phases, in the after-transport here.
00:13:58 So what has to happen is that somehow the old objects have to be deleted but the data has to
be saved first, then the new objects will come in and then the data conversion will happen just
as it did in the development system.
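The SID-to-characteristic-value conversion mentioned above can be illustrated with a toy example. The table layouts and names here are simplified assumptions for illustration, not the real BW table structures:

```python
# Toy illustration of the SID-to-value conversion performed when InfoCube
# data is moved into an advanced DataStore object: classic InfoCube fact
# tables store SIDs (surrogate IDs), while aDSO tables store the
# characteristic values themselves.

# Hypothetical SID table of one characteristic: SID -> value
SID_TABLE = {1: "MAT_A", 2: "MAT_B", 3: "MAT_C"}

def convert_fact_rows(fact_rows):
    """Replace SID keys with characteristic values.

    During the in-place conversion, each fact row is effectively joined
    against the SID table so that the new aDSO table contains the
    characteristic value instead of the SID.
    """
    converted = []
    for row in fact_rows:
        converted.append({
            "material": SID_TABLE[row["material_sid"]],  # SID -> value
            "amount": row["amount"],                     # key figures unchanged
        })
    return converted

cube_rows = [{"material_sid": 1, "amount": 100.0},
             {"material_sid": 3, "amount": 250.0}]
print(convert_fact_rows(cube_rows))
```

In the real system this lookup happens automatically for every characteristic of the converted provider, which is why the data conversion is the tricky part of the in-place path.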
00:14:11 And I would suggest the recommended approach in most cases is to do this step by step, not
convert the full development system first and then bring a huge transport into the productive
system,
00:14:21 but really do this scope by scope and do this gradually over time. And basically in the end,
when you've done this for all of your data flows,
00:14:32 then your system is ready for the BW/4HANA conversion from that perspective, and you can
just do an upgrade to BW/4HANA.
00:14:40 That's the in-place path; now let's come to the "remote conversion". We will provide you with
a remote conversion option as well.
00:14:47 This means you start with a fresh installation, a "greenfield" installation of SAP BW/4HANA,
and then you choose your scope as for the other option as well,
00:14:59 and then you transport exactly the objects within the scope to your SAP BW/4HANA, and
during the import and activation, we will automatically convert the objects from
the classic BW objects into SAP BW/4HANA-optimized objects, or HANA-optimized objects.
Then you have the metadata in the system,
00:15:22 and afterwards it is possible to transfer the data from the selected scope, from the SAP BW
system into your SAP BW/4HANA systems where the HANA-optimized objects are located.
00:15:37 So then you can start the data transfer of exactly those objects, and then we will load the data
from the, let's say, historical objects into the newly created objects.
00:15:50 This is for test reasons. Then you have on the SAP BW/4HANA system the new objects, the data
flows with the new objects including the data,
00:15:59 and then you can start the transport from the development system of the new BW/4HANA
system, so then you transport the metadata, or the objects from development into production,
00:16:12 then you have the new data flows in the productive area, and then you can immediately start to
transfer the data from the historical BW system into a newly created SAP BW/4HANA system,
00:16:27 and then the data is ready for reporting here. I think this is quite a simple way.
00:16:32 Why transfer the data to the selected objects in the development system as well? This is just
for testing reasons, so that you can see that the data is ready, everything is working well, and
so on and so forth.
00:16:46 If you compare the two approaches, one of the advantages of this approach will probably be
that because you only transport the scope which you selected in the original system, you don't
actually modify the existing system, so this will stay in place, it's safe, you don't have to do any
changes here,
00:17:04 and you gradually build up the SAP BW/4HANA system, so that's a nice behavior of this
method. Okay, let's summarize what we've talked about.
00:17:17 Please make sure that you understand all the prerequisites for moving to BW/4HANA.
Especially that all your BW objects have to be moved and transferred or converted to
HANA-optimized objects,
00:17:30 especially for the InfoProviders, but also think about what this means for source systems and
so on. We also mentioned the temporary limitations which we have with BW/4HANA.
00:17:43 Those I wouldn't take too seriously. As we said, by the middle or end of 2017, at least
considering the planning perspective, they should be gone,
00:17:52 and other add-ons will follow, so that's something which you shouldn't worry too much about.
That's something we're working on.
00:18:00 You've seen the scope of the current migration which is part of the SAP BW/4HANA Starter
Add-On,
00:18:08 and we've also talked about the innovations which are going to come and at least roughly
described what the plans will be to support this in future.
00:18:21 - And please don't forget... - Now I'm the one forgetting to tell you that you should do your
self-test! Thanks, Gordon.
00:18:29 You're welcome.
Week 1 Unit 4
00:00:12 Hello and welcome to Unit 4 "Development Roadmap for SAP BW/4HANA". In this unit, we
will give you an outlook on the upcoming features of BW/4HANA
00:00:21 in the four categories which we mentioned in the units before, namely simplicity, openness,
modern interface, and high performance.
00:00:29 So let's talk about the roadmap. As you can see at the bottom of the slide, in the next year we
will ship one feature pack each quarter.
00:00:38 So you can see we will ship these feature packs in quite high frequency. So we will start in Q1
2017, followed by Q2 2017.
00:00:48 And in the future, we will ship of course more of them. We will start early in 2017.
00:00:57 And we will ship some features later on here as well. So why can we do such frequent
shipments?
00:01:05 This is due to the fact that we are decoupled from the NetWeaver release cycles. Therefore it
is possible to ship many of these feature packs here.
00:01:14 So we will start in Q1 2017 with the next feature pack. And here we have some BW/4HANA
optimized business content.
00:01:22 So we will deliver two different business content packages here. One package is for the Basis
stuff and for the administration stuff,
00:01:32 and the other one is for the industry-specific areas. So in the industry-specific areas we will
deliver specific data models or specific data flows
00:01:43 for exactly the industries containing the SAP BW/4HANA optimized objects, which contain a
composite provider and all the other objects which we mentioned before.
00:01:56 Yeah. So let's come to Q2 2017.
00:02:02 And there we will basically enhance what we showed you in the functional overview of this
openSAP class. Remember we talked about the concept of hot and warm data
00:02:15 and the new ideas which we have there with the extension node concept. We're going to
elaborate on this even further to work not only for hot and warm data but even for cold data.
00:02:26 So that basically means that first of all we will integrate or we will enhance this to also
comprise our NLS solution, and do it in a way such that basically on partition level of an
advanced DataStore object
00:02:40 you can decide whether data or a certain partition should reside in hot memory, in warm
memory, or actually even in cold memory. And the handling will be exactly the same in all
cases.
00:02:50 All the movement of data will behave the same whether you bring it from hot to warm, from
warm to cold, or from hot to cold. All the situations will behave exactly the same.
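The partition-level temperature concept can be sketched like this; the tier names follow the talk (hot = in-memory, warm = extension node, cold = NLS), but the data structures and the function are invented for illustration:

```python
# Hypothetical sketch of partition-level data temperature assignment
# for an advanced DataStore object.

TIERS = ("hot", "warm", "cold")  # in-memory, extension node, NLS

# Illustrative partitions of an aDSO, keyed by year, with their tier:
partitions = {
    "2017": "hot",
    "2016": "warm",
    "2015": "cold",
}

def move_partition(partition, target_tier):
    """Move one partition to another temperature tier.

    The point of the concept is that the handling is identical for every
    transition (hot->warm, warm->cold, hot->cold, ...): one operation,
    regardless of source and target tier.
    """
    if target_tier not in TIERS:
        raise ValueError(f"unknown tier: {target_tier}")
    partitions[partition] = target_tier

move_partition("2016", "cold")
print(partitions)
```

The uniform handling is the design choice being announced: the modeler only decides per partition where data should reside, and the movement mechanics stay the same in all cases.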
00:03:00 Yeah, and for the future we are working on tighter integration with our cloud offerings like
SuccessFactors and Ariba. And we will bring exactly those offerings into the BW/4HANA
world.
00:03:12 So this integration is planned for the future. It's the same for BPC support, the Add-On for BPC
is planned for the end of 2017 to be precise.
00:03:26 Here we will support the standard or embedded BPC version. And we are currently working
on leveraging some ideas for machine learning
00:03:36 and bringing that into BW/4HANA administration. So the idea here is that the system gives you
some tips on how to administrate your BW/4HANA system
00:03:47 based on machine learning ideas. So let's come to openness.
00:03:52 In the area of openness we have two main topics, I would say, that's integration with Big Data
or Data Lake scenarios and the interoperability with the HANA native data warehousing
approach.
00:04:02 Let's have a look at Q1 2017. For the Big Data area, we basically plan to provide a new source
system type dedicated to Hadoop.
00:04:12 We want to drive the integration with Hadoop by providing a Spark SQL adapter. And of
course in all these scenarios, we want to make sure that we are as flexible as you know it for
example from the HANA source system,
that we can support both direct access (federation) scenarios and staging scenarios and also
very flexibly switch between both options. When it comes to interoperability with a native data
warehouse approach,
00:04:36 we want to provide kind of DataStore functionality on the HANA native side, which is very nice
in mixed scenarios because then you have a very natural way of providing delta from the
HANA native side to the BW side.
00:04:52 So that will make life, especially in mixed scenarios, certainly easier when you physically move
data from the native side to the BW side. And in the other direction, when it comes to opening
up BW for SQL consumption,
00:05:07 we will further provide additional functionality for the HANA View generation of BW queries. So
in the next feature pack planned for Q2 2017, we are following these principles
00:05:22 and we will integrate the Big Data scenarios, the Data Lakes, more tightly. Therefore we will
support the possibility of running a HANA analysis process in Spark or Hadoop.
00:05:36 And then the execution is done on Hadoop or Spark. Then you can leverage the advantages
from Spark and Hadoop,
00:05:42 and you can bring the data into BW in exactly that way. The next point is that we will integrate
HANA native tables more tightly.
00:05:53 Therefore we will bring functionalities of enterprise information management into
BW/4HANA. And last but not least, we will give you the possibility of creating or generating a
HANA view out of an Open ODS view,
00:06:11 to bring the native world and the BW world closer together, and support exactly that kind of
mixed scenario much better here.
00:06:20 Yeah. Now let's look at the long-term perspective, especially in the Big Data area.
00:06:27 There we see a huge demand from our customer side to, in the long term, build scenarios
which reach over Hadoop and the data warehouse as a whole basically,
00:06:39 so that you for example bring in data which is possibly semi-structured or completely
unstructured into Hadoop first, have certain processes that derive certain structure out of this
data and then eventually, after this process is finished,
00:06:54 move it over to the data warehouse for further analysis at a certain higher speed. And of
course the orchestration of such scenarios needs some sort of overarching tool and
perspective.
00:07:05 That's something which we're working on so that you basically have one tool which allows you
to build very complex data flows which couple certain executions on the Hadoop side with
executions on the BW side, know all the dependencies,
00:07:20 and of course also allow you to monitor such solutions. That's the long-term perspective here
and you will certainly hear more about this basically in 2017.
00:07:30 So let's go to the modern user interface. As we said before in the other units, we provide the
data flow modeler for modeling.
00:07:39 And down the road, we will follow these two principles: UI5-based interfaces for
administrators, and Eclipse-based modeling for modelers and developers.
00:07:55 And therefore we will bring the transformation into Eclipse here. Then everything is possible on
the Eclipse side.
00:08:01 No SAP GUI is needed anymore. And for the administration, we will give you an opportunity to
fully use the Web.
00:08:11 And therefore we are developing some Web- based administration features and tools for you.
Yeah, and finally when it comes to high performance, well you know that we're always working
on this.
00:08:24 This is an ongoing task basically on a daily basis for our colleagues in development. And we
just summarize two of the main points here.
00:08:35 And I think it's very clear that we focus here both on the OLAP side, so the pushdown of OLAP
capabilities into the HANA database to leverage the capabilities of HANA even further
00:08:46 and then provide better performance for more advanced query features. But also you see that
we for example put some efforts here into loading of master data
00:08:57 because master data has in the past not benefitted from all of the optimizations which we did,
for example in the advanced DataStore object area.
00:09:09 And we take this into account by basically providing or building the structures of the master
data tables in a similar way as the advanced DataStore objects so that you have similar
features from a loading perspective, from a rollback perspective,
00:09:22 as you have it with the transactional data in advanced DataStore objects right now. Yeah, and
I guess now it's time to summarize what we learned in this unit and it's also time to summarize
the whole course.
00:09:33 Exactly, so what are the key takeaways from this unit? As you could see, we will strongly work
or we will strongly develop tools and tool sets
00:09:47 to integrate Big Data scenarios and bring Big Data scenarios into the SAP BW/4HANA world.
Furthermore, we are currently working on the add-ons.
00:09:58 We will bring the add-ons to BW/4HANA as well. So here we will start with the planning add-on.
00:10:05 This is planned for the end of 2017. And it's the same I think for data
management here as well.
00:10:16 Yeah, that's also going to come in 2017. Now the nice thing for you is there's no self-test after
this unit.
00:10:24 But there is the final exam, which you should prepare for now. And of course we both wish you
good luck with it.
00:10:30 Yeah, good luck.
www.sap.com
© 2016 SAP SE or an SAP affiliate company. All rights reserved.
No part of this publication may be reproduced or transmitted in any form
or for any purpose without the express permission of SAP SE or an SAP
affiliate company.
SAP and other SAP products and services mentioned herein as well as their
respective logos are trademarks or registered trademarks of SAP SE (or an
SAP affiliate company) in Germany and other countries. Please see
http://www.sap.com/corporate-en/legal/copyright/index.epx#trademark for
additional trademark information and notices. Some software products
marketed by SAP SE and its distributors contain proprietary software
components of other software vendors.
National product specifications may vary.
These materials are provided by SAP SE or an SAP affiliate company for
informational purposes only, without representation or warranty of any kind,
and SAP SE or its affiliated companies shall not be liable for errors or
omissions with respect to the materials. The only warranties for SAP SE or
SAP affiliate company products and services are those that are set forth in
the express warranty statements accompanying such products and services,
if any. Nothing herein should be construed as constituting an additional
warranty.
In particular, SAP SE or its affiliated companies have no obligation to pursue
any course of business outlined in this document or any related presentation,
or to develop or release any functionality mentioned therein. This document,
or any related presentation, and SAP SE's or its affiliated companies'
strategy and possible future developments, products, and/or platform
directions and functionality are all subject to change and may be changed by
SAP SE or its affiliated companies at any time for any reason without notice.
The information in this document is not a commitment, promise, or legal
obligation to deliver any material, code, or functionality. All forward-looking
statements are subject to various risks and uncertainties that could cause
actual results to differ materially from expectations. Readers are cautioned
not to place undue reliance on these forward-looking statements, which
speak only as of their dates, and they should not be relied upon in making
purchasing decisions.
