I just wanted to share links to some code that I uploaded to GitHub a while ago: https://github.com/lordvon?tab=repositories
Mainly there are two things:
-OpenFOAM code.
-An incompressible flow solver and related utilities.
The incompressible flow solver is based on J.B. Perot's Exact Fractional Step Method. I wanted to use this instead of OpenFOAM's PISO for transient flows; in my opinion the Exact Fractional Step Method is superior both theoretically and practically, and you can find more information directly from Perot's publications with a simple Google search. I coded it from scratch in C (with the help of the awesome tool Valgrind) and got to the point of being able to simulate a 2D box in a wind tunnel using an unstructured multi-block grid. However, the additional features I required (3D, a sliding mesh interface) proved too much in terms of time and effort (especially debugging!), so I reverted back to OpenFOAM.
The OpenFOAM modules are very useful and I use them heavily today. They include mesh motion functions for the AMI moving region and turbulence models. Take a look at them if you are interested.
-Nested rotating interfaces: regions rotating and translating inside another rotating region, e.g. cyclorotor simulations.
-Ramped rotating motion: a linear ramp-up of rotational velocity, so that the Courant number is not dominated by high-velocity startup transients. This is supposed to allow faster resolution of low-frequency starting phenomena in transient simulations (one plausible form of the ramp is sketched after this list).
-Proper and Official Spalart-Allmaras Turbulence Model: from a research publication written by Spalart himself in 2013. He acknowledges the confusion and variation in implementations of this popular RANS model and establishes the correct and most up-to-date way to implement it. It differs quite a bit from the standard OpenFOAM S-A model.
-Spalart-Allmaras with ROTATION/CURVATURE CORRECTION: detailed in Zhang and Yang (2013), this model uses the Richardson number to sensitize the S-A model to rotation and curvature (R/C), while bypassing the significant additional computation required by standard S-A R/C correction models. The paper shows that the more efficient model gives almost identical results to the traditional model in several test cases.
-And others... take a look!
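As a rough illustration of the ramped motion mentioned above (my own notation, not necessarily the parameter names used in the repository), a linear ramp to a target rotational speed over a ramp time t_ramp can be written as

\omega(t) = \omega_{\mathrm{target}} \, \min\!\left( \frac{t}{t_{\mathrm{ramp}}}, \; 1 \right)

so the mesh accelerates uniformly until t = t_ramp and rotates at constant speed afterwards.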
Saturday, December 21, 2013
Friday, December 20, 2013
FEA Convergence: Order of Elements and Sharp Corners!
I found through experience that the caution against using first-order elements in FEA is well-founded. A convergence test on a simple rectangular plate, fixed at both ends and loaded with transverse and normal pressure forces, showed that the maximum displacement required 10-100 times more nodes to converge with first-order elements than with second-order elements. Wow.
However, I could not get the maximum von Mises stress to converge, no matter the order of the elements, due to the presence of sharp corners at the fixed ends. This is apparently a well-known quirk of FEA at so-called 're-entrant corners'.
UPDATE: It turns out there was a mismatch between my intended physical model and the one I implemented in the *BOUNDARY card. All I had to do was add rotational constraints at the boundary (a sketch of the change is below). This was revealed to be the problem after I tried fixing the last two layers of nodes at each end.
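Here is a minimal, hypothetical sketch of that change, assuming the clamped edge nodes are collected in a node set called FIXED (the set name is mine, not from my actual deck); in a CalculiX *BOUNDARY card the second and third fields give the first and last constrained degree of freedom, with 1-3 the translations and 4-6 the rotations:

** What I had: translations only
*BOUNDARY
FIXED,1,3
** What the intended clamp needed: translations and rotations
*BOUNDARY
FIXED,1,6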
Although the maximum stress is tamer now, it still does not converge completely. I notice that beyond a certain number of nodes, the maximum stress moves from the node adjacent to the corner onto the corner itself. I take the finest mesh at which the maximum stress is still not on the corner as giving the actual estimated maximum stress. This is also around the point at which the maximum displacement converges.
Thus I have found a satisfactory solution for my sharp corner convergence issue.
UPDATE 2: Perhaps a better convergence criterion would be strain energy. I have yet to test this.
Friday, December 13, 2013
Compiling CalculiX CrunchiX (CCX) from Source
I was having errors with the precompiled Linux CCX executable provided on the CalculiX website, so I decided to compile it from source to finally get it working. I followed the compilation steps detailed on the CalculiX website, with the following few changes.
I compiled on Ubuntu 12.04 LTS.
In order to compile CalculiX, you also need to install SPOOLES.2.2 and ARPACK.
Installing SPOOLES.2.2 is straightforward; just follow the directions on its website.
Installing ARPACK required a bit more modification. For me, I had to make the following changes in 'ARmake.inc' (a sketch of the edited lines follows this list):
-Added the '-lg2c' flag to LDFLAGS, LNFLAGS, and FFLAGS
-Changed the Fortran compiler to gfortran
-Changed the MAKE path to /usr/bin/make (original: /bin/make)
-Changed PLAT to INTEL, rather than SUN4
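For reference, here is roughly what the edited ARmake.inc lines looked like. The variable names follow the stock ARmake.inc as I remember it and may differ between ARPACK versions, so treat this as an illustration rather than a drop-in file:

PLAT    = INTEL                 # was SUN4
FC      = gfortran              # was f77
FFLAGS  = -O -lg2c              # '-lg2c' appended
LDFLAGS = -lg2c
LNFLAGS = -lg2c
MAKE    = /usr/bin/make         # was /bin/make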
And the following change to 'UTIL/second.f':
-Changed the two lines:
REAL ETIME
EXTERNAL ETIME
To:
EXTERNAL REAL ETIME
Then, I was able to compile CCX as detailed on the website.
Another note about the deck input: I was trying to use the solver on the test cases provided on the CalculiX website, and it took me a little while to figure out how to get the post-processed results (colorful stress and strain fields, etc.). I thought the results were written to the *.frd file by the *EL PRINT cards (which are already in the test input files), but the datasets to be displayed in CGX actually have to be written by *EL FILE and *NODE FILE. So, to get the stress and displacement fields, I had to type the following into the deck input file:
*EL FILE
S
*NODE FILE
U
Here S denotes stress and U denotes displacement. For the keywords for other fields, you can check the CCX manual. A sketch of where these cards sit within a step is below.
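For context, these output-request cards belong inside the step definition. A minimal, hypothetical static step might look like the following (the load card and the node set name LOADNODES are placeholders of mine, not taken from the CalculiX test cases):

*STEP
*STATIC
*CLOAD
LOADNODES,2,-100.
*EL FILE
S
*NODE FILE
U
*END STEP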
Finite Element Analysis (FEA): CalculiX
Good morning,
I wanted to share a great piece of software for Finite Element Analysis, which usually refers to structural solvers. I am using it to verify whether certain structures subjected to certain loads operate safely within displacement and yield-stress criteria. I am using it for a prototype of an invention of mine, which I am excited to finally be close to building. This prototype is the result of CFD and FEA analysis. Anyway, the CalculiX website provides all of the software, along with helpful installation guides and tutorials.
I also wanted to share a troubleshooting tip for CalculiX CrunchiX (CCX). CalculiX has two components: the GUI, CalculiX GraphiX (CGX), and the solver, CalculiX CrunchiX (CCX). I was getting the following error when trying to use the precompiled CCX executable provided on the CalculiX website:
ccx: error while loading shared libraries: libgfortran.so.2: cannot open shared object file: No such file or directory
I did not have libgfortran.so.2, but I did have libgfortran.so.3. I verified this by:
locate libgfortran.so.3
Using the directory location output from the above command, I linked the .so.3 to .so.2 by:
sudo ln -s /usr/lib/x86_64-linux-gnu/libgfortran.so.3 /usr/lib/libgfortran.so.2
And then my executable no longer had the aforementioned error...
HOWEVER, I kept getting other errors. I could do one of two things: try to install the package that provides libgfortran.so.2, or compile CalculiX from source. Since the package containing libgfortran.so.2 is deemed obsolete by the Ubuntu developers, I could not find and install it easily. So I decided to compile CalculiX from source, and I did so successfully. I will create another post with the details.
Tuesday, August 13, 2013
C Programming Memory Debugging: Valgrind
I wanted to spread the word about Valgrind, a memory-management debugger for C programs. I just found out about it, and in the last hour or so it has already helped me identify numerous bugs in my CFD program. My CFD program (an unsteady, multi-block, structured-grid, incompressible Navier-Stokes solver) is the largest program I have ever written, and bugs occasionally pop up when inputs change (in my case, larger grids and thus larger memory allocations); in my search for solutions I came across Valgrind. It is also very easy to use and well documented. Go to the Valgrind website and check out the 'Quick Start' section to try it out yourself.
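As a small illustration of the kind of bug it catches, here is a deliberately broken toy program (the file name and code are made up for this example, not from my solver). Compile with debugging symbols and run under Memcheck:

gcc -g -O0 leaky.c -o leaky
valgrind --leak-check=yes ./leaky

/* leaky.c: a toy program with an out-of-bounds write and a memory leak. */
#include <stdlib.h>

int main(void)
{
    int i;
    double *u = malloc(10 * sizeof(double)); /* room for 10 values...        */
    for (i = 0; i <= 10; i++)                /* ...but 11 are written, so    */
        u[i] = 0.0;                          /* an invalid write at i == 10  */
    return 0;                                /* u is never freed: leak       */
}

Valgrind's report points at the exact line of the invalid write and lists the unfreed allocation, which is exactly the kind of information that is painful to dig up by hand in a large solver.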
Tuesday, August 6, 2013
Writing an Incompressible Flow Solver
To all who have requested help from me directly by message: I hope your issues are resolved. Sorry if I missed your message and never got back to you.
For the past few months I have been enthusiastically writing my own CFD code. It is an incompressible Navier-Stokes (INS) solver. The two reasons why I decided to write my own code instead of using OpenFOAM are the following:
1. Nested Sliding Interface Capability.
2. Time Accuracy.
I detail these reasons in the following paragraphs.
The First Reason
OpenFOAM's AMI/GGI is hard-coded to handle only one rotating region. Users have, however, figured out a way to use more than one in the same simulation (it is somewhere on the CFD-Online forums). But to my knowledge there is no way to nest them, where nesting means putting one rotating region inside another. AFAIK Star-CCM+ is capable of this, but that is out of my budget (which is non-existent), and the second reason listed above would still remain a problem.
The Second Reason
You would think that, compared to the compressible NS (CNS) equations, the incompressible equations would be easier, since you get to assume that density does not change and set all of the density derivatives to zero. It is a simplifying assumption to some degree, but in fact the solution method for the INS is less straightforward. With the CNS, the equation of state (the pressure-density-temperature relation) closes the equation set. With the INS, you can no longer use this closing equation. What most people use to address this is a bit of a dirty solution: operator splitting. Without the equation of state there is no explicit equation for pressure, yet pressure still appears in the momentum equations of the INS. The widely employed workaround is to split the equations, solving for an approximate velocity and an approximate pressure and coupling the two so that continuity is (approximately) satisfied. As you might guess, this has been shown to reduce the temporal order of accuracy of whatever time-stepping scheme you use. Temporal accuracy is especially important for turbomachinery-type simulations. A sketch of the classic split is below.
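For concreteness, here is a minimal sketch of a classic (Chorin-type) fractional step in semi-discrete form; this is my notation for the generic idea, not the exact scheme used by PISO or by any particular solver:

\frac{\mathbf{u}^{*} - \mathbf{u}^{n}}{\Delta t} = -(\mathbf{u}^{n} \cdot \nabla)\,\mathbf{u}^{n} + \nu \nabla^{2} \mathbf{u}^{n}
\nabla^{2} p^{n+1} = \frac{1}{\Delta t} \, \nabla \cdot \mathbf{u}^{*}
\mathbf{u}^{n+1} = \mathbf{u}^{*} - \Delta t \, \nabla p^{n+1}

The intermediate velocity u* ignores the pressure gradient, the Poisson solve produces a pressure that projects u* onto a (discretely) divergence-free field, and the splitting error between the two steps is what degrades the formal temporal accuracy.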
There is a way to solve the INS without operator splitting, by using vorticity formulations. However, these methods have major practical drawbacks, including the specification of boundary conditions and complications that arise in three dimensions.
After some research I came across the Exact Fractional Step (EFS) Method by J.B. Perot. With this method you can solve the INS exactly, because pressure is eliminated. Furthermore, primitive variables are used, including in the specification of boundary conditions. Basically, the biggest issues of both the vorticity formulations and the traditional operator-splitting methods are fixed, while their relative advantages are retained. To me it seems like this is THE way to solve the INS. Why is it not already widely used? Well, according to Dr. Perot, whom I have contacted personally by email, it is because humans are creatures of habit. Ha ha.

Another reason I can immediately think of is that a particular arrangement of flow quantities is required. I am using a staggered variable arrangement, where velocity components live on the cell edges and scalar quantities (such as pressure or turbulence variables) are stored at cell centers. The staggered arrangement is less common in CFD solvers than co-located arrangements, where all velocity components and scalar quantities are stored at the nodes, because staggered arrangements are less straightforward to transform accurately on structured grids. As I understand it, using staggered arrangements on unstructured grids is a topic of Dr. Perot's research. I forgot to mention that I am using structured grids in my code and am only interested in them, as opposed to unstructured grids. Anyway, I did find papers by Pieter Wesseling, who is now retired and has a published textbook on CFD, that show how to transform staggered arrangements for good accuracy with relative ease and practicality. A sketch of the EFS idea, as I understand it, follows.
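Roughly, and in my own notation (this is a paraphrase of how I read Perot's papers, not a reproduction of his derivation), the spatially discretized INS forms a saddle-point system, and the EFS reduces it exactly instead of approximately factoring it:

\begin{bmatrix} A & G \\ D & 0 \end{bmatrix} \begin{bmatrix} u \\ p \end{bmatrix} = \begin{bmatrix} r \\ 0 \end{bmatrix}, \qquad u = C\,s \ \text{with}\ D\,C = 0 \quad\Rightarrow\quad C^{\mathsf{T}} A\, C\, s = C^{\mathsf{T}} r .

Here D is the discrete divergence, G the discrete gradient, and C a discrete curl whose columns span the null space of D, so continuity D u = 0 holds by construction (to machine precision) and pressure drops out when the momentum equations are premultiplied by C^T; the system that remains to be inverted is the reduced one in s.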
Status
So far I have implemented a laminar multiblock structured grid explicit solver and tested it on a lid-driven cavity simulation. The results are good. Right now, I am in the finishing stages of implementing a Spalart-Allmaras turbulence model. I hope to soon run and post results of a high-Re airfoil simulation.
Although the scheme is fully explicit in time, I still have to invert a system (i.e., solve a matrix problem Ax = b) as part of the EFS. As a side note, traditional operator-splitting methods also always have to invert a system for the pressure, and their error shows up in continuity. For the EFS, however, the error shows up in vorticity, and continuity is always satisfied to machine precision. It stands to reason that continuity error is more detrimental than vorticity error.
Right now, I am using a canned Conjugate Gradients (CG) algorithm by Richard Shewchuk as the solver. I plan, in the distant future, to implement this algorithm on the GPU, and CG seems well suited to that. I am also interested in algebraic multigrid preconditioning, which tends to reduce the number of iterations by an order of magnitude. But until all of the desired solver features are done, vanilla CG will be good enough. A bare-bones sketch of what I mean by vanilla CG is below.
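For reference, here is a minimal sketch of unpreconditioned CG for a dense symmetric positive-definite system. It is my own toy illustration of the algorithm, not Shewchuk's code and not the routine in my solver (which works on the sparse EFS system):

/* cg_sketch.c: bare-bones unpreconditioned Conjugate Gradients for a
 * dense symmetric positive-definite system A x = b. */
#include <math.h>
#include <stdlib.h>

static void matvec(int n, const double *A, const double *x, double *y)
{
    int i, j;
    for (i = 0; i < n; i++) {
        double s = 0.0;
        for (j = 0; j < n; j++)
            s += A[i * n + j] * x[j];
        y[i] = s;
    }
}

static double dot(int n, const double *a, const double *b)
{
    int i;
    double s = 0.0;
    for (i = 0; i < n; i++)
        s += a[i] * b[i];
    return s;
}

/* Solves A x = b; x holds the initial guess on entry, the solution on exit. */
void cg_solve(int n, const double *A, const double *b, double *x,
              double tol, int max_iter)
{
    int i, k;
    double *r  = malloc(n * sizeof(double)); /* residual b - A x         */
    double *p  = malloc(n * sizeof(double)); /* search direction         */
    double *Ap = malloc(n * sizeof(double)); /* A times search direction */
    double rr, rr_new, alpha, beta;

    matvec(n, A, x, Ap);
    for (i = 0; i < n; i++) {
        r[i] = b[i] - Ap[i];
        p[i] = r[i];
    }
    rr = dot(n, r, r);

    for (k = 0; k < max_iter && sqrt(rr) > tol; k++) {
        matvec(n, A, p, Ap);
        alpha = rr / dot(n, p, Ap);          /* optimal step length      */
        for (i = 0; i < n; i++) {
            x[i] += alpha * p[i];
            r[i] -= alpha * Ap[i];
        }
        rr_new = dot(n, r, r);
        beta = rr_new / rr;                  /* conjugate direction mix  */
        for (i = 0; i < n; i++)
            p[i] = r[i] + beta * p[i];
        rr = rr_new;
    }

    free(r);
    free(p);
    free(Ap);
}

The only expensive kernels are the matrix-vector product and the dot products, which is why CG maps so naturally onto GPUs and why a preconditioner (such as algebraic multigrid) can be bolted on later without restructuring the loop.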
This has been quite a journey and I have learned very much. Thanks for reading.