Tuesday, August 13, 2013
C Programming Memory Debugging: Valgrind
I wanted to spread the word about Valgrind, a memory-management debugger for C programs. I only just found out about it, and in the last hour or so it has already helped me identify numerous bugs in my CFD program. My CFD program (an unsteady, multi-block, structured-grid, incompressible Navier-Stokes solver) is the largest program I have ever written, and bugs occasionally pop up when inputs change (in my case, larger grids and thus larger memory allocations); while searching for solutions I came across Valgrind. It is also very easy to use and well documented. Go to the Valgrind website and check out the 'Quick Start' section to try it out yourself.
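To give a flavor of what it catches, here is a deliberately buggy toy program (my own illustration, not code from my solver) and the commands to run it under Valgrind:

```c
/* buggy.c -- deliberately broken example for Valgrind.
 * Compile with debug info so Valgrind can report file and line numbers:
 *     gcc -g -O0 -std=c99 buggy.c -o buggy
 * Then run it under Valgrind:
 *     valgrind --leak-check=full ./buggy
 */
#include <stdlib.h>

int main(void)
{
    int n = 10;
    double *u = malloc(n * sizeof *u); /* room for 10 doubles */

    for (int i = 0; i <= n; i++)       /* off-by-one: writes u[10] */
        u[i] = 0.0;

    return 0;                          /* u is never freed: a leak */
}
```

Valgrind flags the out-of-bounds write as an 'Invalid write of size 8' at the offending line and reports the unfreed allocation as 'definitely lost'.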
Tuesday, August 6, 2013
Writing an Incompressible Flow Solver
To all who have requested help from me directly by message: I hope your issues are resolved. I am sorry if I missed your message and never got back to you.
For the past few months I have been enthusiastically writing my own CFD code: an incompressible Navier-Stokes (INS) solver. There are two reasons I decided to write my own code instead of using OpenFOAM:
1. Nested Sliding Interface Capability.
2. Time Accuracy.
I detail these reasons in the following paragraphs.
The First Reason
OpenFOAM's AMI/GGI is hard-coded to handle only one rotating region. Users have figured out a workaround to use more than one in the same simulation (it is documented somewhere on the CFD Online forums), but to my knowledge there is no way to nest them, that is, to put one rotating region inside another. As far as I know, Star-CCM+ is capable of this, but it is out of my budget (which is non-existent), and the second reason listed above would still remain a problem.
The Second Reason
You would think that, compared to the compressible NS (CNS) equations, the incompressible equations would be easier, since you get to assume that density does not change and set all of the density derivatives to zero. It is a simplifying assumption to some degree, but in fact the solution method for the INS is less straightforward. With the CNS, the equation of state (the pressure-density-temperature relation) closes the equation set. With the INS, you can no longer use this closing equation. What most people use to address this is a bit of a dirty solution: operator splitting. Without the equation of state there is no explicit equation for pressure, yet pressure still appears in the INS momentum equations. The widely employed workaround is to split the equations so that you solve for an approximate pressure and an approximate velocity and then couple them to satisfy continuity (a sketch of the standard splitting follows below). As you might guess, this has been shown to reduce the temporal order of accuracy of whatever scheme you use. Temporal accuracy is especially important for turbomachinery-type simulations.
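To make the splitting concrete, here is one time step of the classic Chorin-style projection scheme, written as a sketch (my own summary of the standard approach; details vary from scheme to scheme):

```latex
% One time step of a basic (Chorin-style) projection scheme.
\begin{align*}
  \frac{\mathbf{u}^{*} - \mathbf{u}^{n}}{\Delta t}
    &= -(\mathbf{u}^{n}\cdot\nabla)\,\mathbf{u}^{n} + \nu\,\nabla^{2}\mathbf{u}^{n}
    && \text{momentum advanced without pressure} \\
  \nabla^{2} p^{\,n+1}
    &= \frac{1}{\Delta t}\,\nabla\cdot\mathbf{u}^{*}
    && \text{pressure Poisson equation} \\
  \mathbf{u}^{n+1}
    &= \mathbf{u}^{*} - \Delta t\,\nabla p^{\,n+1}
    && \text{projection onto divergence-free fields}
\end{align*}
```

Because the momentum equation and the divergence-free constraint are enforced in separate sub-steps, the intermediate field u* satisfies neither exactly, and that splitting error is what eats into the temporal order of accuracy.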
There is a way to solve the INS without operator splitting, by using vorticity formulations. However, these methods have major practical drawbacks, including the difficulty of specifying boundary conditions and complications that arise in three dimensions.
After some research I came across the Exact Fractional Step (EFS) method by J. B. Perot. With this method you can solve the INS exactly, because pressure is eliminated. Furthermore, primitive variables are used, including in the specification of boundary conditions. In short, the biggest issues of both the vorticity formulations and the traditional operator-splitting methods are fixed, while their relative advantages are retained. To me it seems like this is THE way to solve the INS. Why is it not already widely used? According to Dr. Perot, whom I contacted personally by email, humans are creatures of habit. Ha ha.

Another reason I can immediately think of is that a particular arrangement of flow quantities is required. I am using a staggered variable arrangement, where velocity components are stored at the cell faces and scalar quantities (such as pressure or turbulence variables) are stored at cell centers; a sketch of this layout follows below. The staggered arrangement is less common in CFD solvers than the co-located arrangement, where all velocity components and scalar quantities are stored at the nodes, because staggered arrangements are less straightforward to transform accurately on structured grids. As I understand it, staggered arrangements on unstructured grids are a topic of Dr. Perot's research. I should mention that my code uses structured grids, and they are the only kind I am interested in. In any case, I did find papers by Pieter Wesseling (now retired, and the author of a CFD textbook) that show how to transform staggered arrangements for good accuracy with relative ease and practicality.
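To illustrate the arrangement, here is a minimal C sketch of a single 2-D staggered block. This is my own hypothetical layout for illustration, not my solver's actual data structures:

```c
/* A 2-D staggered (MAC-style) block: pressure at cell centers, u on the
 * (nx+1) x ny x-normal faces, v on the nx x (ny+1) y-normal faces.
 * Hypothetical illustration only. */
#include <stdlib.h>

typedef struct {
    int nx, ny;   /* number of cells in x and y          */
    double *p;    /* nx * ny       cell-centered scalars */
    double *u;    /* (nx+1) * ny   x-velocity on x-faces */
    double *v;    /* nx * (ny+1)   y-velocity on y-faces */
} StaggeredBlock;

/* Row-major index helpers; note the extra face in the staggered direction. */
static inline double *P(StaggeredBlock *b, int i, int j) { return &b->p[j * b->nx       + i]; }
static inline double *U(StaggeredBlock *b, int i, int j) { return &b->u[j * (b->nx + 1) + i]; }
static inline double *V(StaggeredBlock *b, int i, int j) { return &b->v[j * b->nx       + i]; }

/* Discrete divergence in cell (i,j). With this layout it is a compact
 * two-point difference per direction, which is what lets a staggered
 * scheme drive the continuity error down to machine precision. */
static inline double divergence(StaggeredBlock *b, int i, int j,
                                double dx, double dy)
{
    return (*U(b, i + 1, j) - *U(b, i, j)) / dx
         + (*V(b, i, j + 1) - *V(b, i, j)) / dy;
}

/* Allocate zero-initialized fields; returns 1 on success, 0 on failure. */
static int staggered_alloc(StaggeredBlock *b, int nx, int ny)
{
    b->nx = nx; b->ny = ny;
    b->p = calloc((size_t)nx * ny,       sizeof *b->p);
    b->u = calloc((size_t)(nx + 1) * ny, sizeof *b->u);
    b->v = calloc((size_t)nx * (ny + 1), sizeof *b->v);
    return b->p && b->u && b->v;
}
```

The compact divergence stencil is the payoff of the staggered layout; the price, as mentioned above, is that transforming it accurately to curvilinear structured grids takes more care.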
Status
So far I have implemented an explicit, laminar, multi-block, structured-grid solver and tested it on a lid-driven cavity case. The results look good. Right now I am in the finishing stages of implementing the Spalart-Allmaras turbulence model, and I hope to run and post results of a high-Reynolds-number airfoil simulation soon.
Although the solver is fully explicit, I still have to invert a system (i.e., solve a matrix problem Ax = b) as part of the EFS. As a side note, traditional operator-splitting methods also always have to invert a system for the pressure, and their error appears in continuity. With the EFS the error appears in vorticity instead, and continuity is always satisfied to machine precision. It stands to reason that a continuity error is more detrimental than a vorticity error.
Right now I am using a canned conjugate gradient (CG) implementation based on Richard Shewchuk's report as the solver; a sketch of the iteration follows below. In the future I plan to implement this algorithm on the GPU, and CG seems well suited to that. I am also interested in algebraic multigrid preconditioning, which tends to reduce the number of iterations by an order of magnitude. But until all of the desired solver features are done, vanilla CG will be good enough.
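For reference, the unpreconditioned CG iteration is short. Here is a minimal dense-matrix sketch in C (my own illustration; Shewchuk's report derives every step). A real flow solver would of course use a sparse or matrix-free operator instead of a dense matrix:

```c
/* Unpreconditioned conjugate gradients for a dense SPD system A x = b
 * (n x n, row-major). Illustration only; not my solver's actual code. */
#include <math.h>
#include <stdlib.h>

static double dot(const double *a, const double *b, int n)
{
    double s = 0.0;
    for (int i = 0; i < n; i++) s += a[i] * b[i];
    return s;
}

static void matvec(const double *A, const double *x, double *y, int n)
{
    for (int i = 0; i < n; i++) {
        y[i] = 0.0;
        for (int j = 0; j < n; j++) y[i] += A[i * n + j] * x[j];
    }
}

/* x holds the initial guess on entry and the solution on exit.
 * Returns the number of iterations performed. */
int cg_solve(const double *A, const double *b, double *x,
             int n, int max_iter, double tol)
{
    double *r = malloc(n * sizeof *r);  /* residual b - A x */
    double *d = malloc(n * sizeof *d);  /* search direction */
    double *q = malloc(n * sizeof *q);  /* scratch for A d  */
    int it;

    matvec(A, x, q, n);
    for (int i = 0; i < n; i++) { r[i] = b[i] - q[i]; d[i] = r[i]; }
    double rr = dot(r, r, n);

    for (it = 0; it < max_iter && sqrt(rr) > tol; it++) {
        matvec(A, d, q, n);
        double alpha = rr / dot(d, q, n);           /* step length   */
        for (int i = 0; i < n; i++) x[i] += alpha * d[i];
        for (int i = 0; i < n; i++) r[i] -= alpha * q[i];
        double rr_new = dot(r, r, n);
        double beta = rr_new / rr;                  /* new direction */
        for (int i = 0; i < n; i++) d[i] = r[i] + beta * d[i];
        rr = rr_new;
    }

    free(r); free(d); free(q);
    return it;
}
```

Each iteration costs one matrix-vector product plus a few dot products and vector updates, which is exactly the kind of work that maps well onto a GPU.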
This has been quite a journey and I have learned very much. Thanks for reading.