
Accessing Abaqus Database Using Python Scripting

Matteo Carrara mcarrara@gatech.edu December 18, 2012

Introduction

This short tutorial shows how to access the Abaqus history output database in a convenient way using Python scripting. This is very useful when a large amount of data must be extracted for sophisticated post-processing purposes. As a complement, it also shows how the process of starting the analysis and gathering the data can be completely automated using shell scripting under a Linux OS. Before getting started, I want to point out that this is NOT an introduction to either Python or Abaqus. It is only a tutorial on how to get things up and running quickly and painlessly when dealing with a lot of output data for sophisticated post-processing. I wrote the routines myself, so this may not be the most efficient or elegant way of doing it, but it works. If you have any comment, suggestion, or improvement regarding this material, do not hesitate to contact me!

Accessing History Output Database

In order to access a history output database, it must be requested when setting up the analysis. I suppose that if you are reading this guide you are familiar with that, so there's no need to explain how to do it. You should also be familiar with the fact that all the analysis results are contained in the Abaqus .odb file. This is a database that can be easily accessed using Python. To do so, you need to create a .py file, say report.py, and place it in the same folder as your .odb file. What follows is a script I created that reads nodal data from the .odb file and then writes them to a .dat file that is more convenient to use with, for example, Matlab (don't worry, I am going to explain every single line of it).
# import database
from odbAccess import *

# data declaration
A = []
f = open('<nodesFileNameWE>', 'r')
cont = 0
for line in f.readlines():
    A.append(line.strip())  # strip the newline so the label can be used in a key
    cont = cont + 1
lines_number = cont
nodevec = range(0, lines_number, 1)

# output file
dispFile = open('<outputFilenameWE>', 'w')

# pointers
odb = openOdb(path='<odbFileNameWE>')
step = odb.steps['<stepName>']

# get data
cont = 0
for element in nodevec:
    region = step.historyRegions['Node <partName>.' + str(A[element])]
    variable = region.historyOutputs['<variableName>'].data
    for time, data in variable[1:]:
        dispFile.write('%10.4E ' % (data))
    dispFile.write('\n')
    cont = cont + 1

# output file closure
dispFile.close()
f.close()

Listing 1: Python script to access the history output database

Let's get down to the code now. The first thing to do is to import the .odb database module

from odbAccess import *

After that, the code reads the node numbers from a text file (<nodesFileNameWE>, where WE means "with extension") and arranges them into an array (nodevec)

A = []
f = open('<nodesFileNameWE>', 'r')
cont = 0
for line in f.readlines():
    A.append(line.strip())
    cont = cont + 1
lines_number = cont
nodevec = range(0, lines_number, 1)
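As a side note, the read-and-count loop above can be condensed with a list comprehension. Here is a minimal self-contained sketch; the file name nodes.txt stands in for <nodesFileNameWE> and is created on the spot for illustration:

```python
# Build a small sample nodes file (stands in for <nodesFileNameWE>)
with open('nodes.txt', 'w') as f:
    f.write('1\n2\n3\n')

# Read the node labels and count them in one pass
with open('nodes.txt') as f:
    A = [line.strip() for line in f]

lines_number = len(A)  # replaces the manual 'cont' counter
nodevec = range(0, lines_number, 1)

print(A)             # ['1', '2', '3']
print(lines_number)  # 3
```

The result is identical to the explicit loop, just shorter.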

At this point the output file where the data will be saved for post-processing is declared

dispFile = open('<outputFilenameWE>', 'w')

The core of the procedure then lies in pointing in the right direction, i.e. to the region of the database where the data of interest are stored. This is done with the following commands


odb = openOdb(path='<odbFileNameWE>')
step = odb.steps['<stepName>']

It is worth noting that the names associated with the database and the step are exactly the names that you assigned to these entities when you built the model. To actually get to the data we want, we need to go down the pointer road a little further. This is accomplished by two nested for loops

cont = 0
for element in nodevec:
    region = step.historyRegions['Node <partName>.' + str(A[element])]
    variable = region.historyOutputs['<variableName>'].data
    for time, data in variable[1:]:
        dispFile.write('%10.4E ' % (data))
    dispFile.write('\n')
    cont = cont + 1

Again, the part name is the one you assigned when building the FE model, while the variable name is the name of the variable you want to retrieve. For example, if you need the displacement in the x2 direction, you would use U2. As a side note, with this technique you can also get the time vector by accessing the time entry of the variable array instead of the data entry. The last two lines of the script simply close the files opened during the extraction procedure.
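To make the time/data unpacking concrete outside of Abaqus, here is a minimal sketch with a synthetic history output: in the real database, the .data attribute is a sequence of (time, value) pairs, which is mimicked below with plain Python data. The numbers themselves are invented for illustration.

```python
# Synthetic stand-in for region.historyOutputs['<variableName>'].data:
# a sequence of (time, value) pairs
variable = [(0.0, 0.0), (0.1, 1.23e-4), (0.2, -5.67e-4)]

times = []
values = []
for time, data in variable[1:]:  # skip the first sample, as in the script
    times.append(time)
    values.append('%10.4E' % data)

print(times)   # [0.1, 0.2]
print(values)  # the values formatted in scientific notation
```

The same unpacking shows how the time vector can be collected alongside the data if you need it.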

dispFile.close()
f.close()

Analysis Automation

The script above can be launched from the terminal by typing

/path/to/abaqus/executable python report.py

It is, however, far more convenient to automate the procedure of

1. launching the FE analysis;
2. launching the data extraction.

The automation is particularly useful when launching FE analyses in series. To this end, shell scripting can be used. As a side note, this part assumes you are running Abaqus under a Linux OS; the automation procedure must be changed when using a Windows OS.


abq_path=/path/to/abaqus/executable
PARAM=""
#
# i-th analysis
#
cd /path/to/ith/analysis/folder/
$abq_path job=<jobName> cpus=<nCpus> $PARAM interactive
sleep 1s
rm <filesToClean>          # remove leftovers from previous runs
mv *.odb output.odb        # rename the database for the extraction script
time $abq_path python report.py
echo -e "\n"
echo "Extraction completed"
echo -e "\n"
sleep 1s
echo -e "\n"
echo "Job terminated"
echo -e "\n"

Note the keyword interactive when launching the FE simulation. This is a trick to make the computer finish the analysis before extracting the data. Also note the trick of renaming the .odb database so that the same Python script can be used for all the analyses (without having to change the code for each analysis, provided you need the same data over the same region of the model). The code above can be repeated for every analysis that you need to run, and it will save you a lot of time, especially if you need to deal with repeated FE analyses requiring sophisticated data post-processing (that Abaqus CAE cannot perform). I used this code effectively to perform 2D and 3D Fourier transforms for Lamb wave propagation in composite plates, and it worked like a charm! Well, I hope you found this tutorial useful! Again, if you have any hint on how to improve this, please contact me!
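As a possible alternative to shell scripting, the same launch-extract sequence can be driven from Python with the subprocess module, which should also help on Windows. This is only a sketch under the same placeholder conventions as above (the Abaqus path, job name, and CPU count are assumptions you must fill in):

```python
import subprocess

def build_command(abq_path, job_name, n_cpus):
    """Assemble the Abaqus launch command used in the shell script above."""
    return [abq_path, 'job=%s' % job_name, 'cpus=%d' % n_cpus, 'interactive']

def run_analysis(abq_path, job_name, n_cpus):
    # 'interactive' makes Abaqus block until the analysis finishes,
    # so the extraction step below only starts afterwards
    subprocess.run(build_command(abq_path, job_name, n_cpus), check=True)
    # run the extraction script with the Abaqus Python interpreter
    subprocess.run([abq_path, 'python', 'report.py'], check=True)

print(build_command('/path/to/abaqus', 'myJob', 4))
```

Calling run_analysis once per analysis folder reproduces the loop of the shell script in a single cross-platform driver.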
