Simultaneous Digital Design and Additive Manufacture of Structures and Materials

The integration of emerging technologies into a complete digital thread promises to disrupt design and manufacturing workflows throughout the value chain to enable efficiency and productivity transformation, while unlocking completely new design freedom. A particularly appealing aspect involves the simultaneous design and manufacture of the macroscale structural topology and material microstructure of a product. Here we demonstrate such a workflow that digitally integrates: design automation – conception and automation of a design problem based on multiscale topology optimization; material compilation – computational geometry algorithms that create spatially-variable, physically-realizable multimaterial microstructures; and digital fabrication – fabrication of multiscale optimized components via voxel-based additive manufacturing with material jetting of multiple photo-curable polymers. We validate the digital design and manufacturing workflow by designing, fabricating, and testing a series of structures that illustrate capabilities, show how it empowers the exploitation of new design freedom, and even challenges traditional design principles relating form, structure, and function.

are shown in Section 2 and Fig. S2 respectively. This process yields the macroscopic material layout and the microstructural parameters at each macroscopic point. It is followed by the material compilation process, wherein we first determine the microscale geometry and distribution of the fibrous material via a microstructure realization algorithm (see Fig. S4, Fig. S6 and Section 3.1), and then by the voxelization sub-process (see Fig. S8 and Section 3.3), which leads to digital fabrication of the material and structure via additive manufacturing. The results of each process/sub-process are shown to either the left or right using the display element of the flowchart. In this text, we describe each process/sub-process involved in the workflow in full detail. Here and subsequently, we use Case 3 of the planar problems described in the main text (result in Fig. 2d) to illustrate the workflows of the various processes and sub-processes. The organization of the rest of this text follows the workflow shown in Fig. S1.

Design Automation

In this section, we describe computational design automation via topology optimization and the intermediate steps involved (viz. homogenization, forward analysis, and sensitivity analysis) in detail. For the sake of clarity, we reiterate the general mathematical form of the topology optimization problem described in the main text. The goal is to simultaneously find an optimal macroscale material layout (i.e., topology) and an optimal microstructure:

$$\begin{aligned}
\min_{\bar{x}} \quad & f(\bar{u}(\bar{x}), \bar{x}) \\
\text{subject to} \quad & g_i(\bar{u}(\bar{x}), \bar{x}) \le 0, \quad i = 1, \dots, m \\
& h_j(\bar{u}(\bar{x}), \bar{x}) = 0, \quad j = 1, \dots, n \\
& \bar{x}_{\min} \le \bar{x} \le \bar{x}_{\max}
\end{aligned} \tag{S1}$$

Here, as in the main text, $f$ is the objective, $g_i$ and $h_j$ are the inequality and equality constraints, and $\bar{x}$ is the vector of optimization variables with lower and upper bounds $\bar{x}_{\min}$ and $\bar{x}_{\max}$. $\bar{u}$ is the displacement field obtained by solving the elastostatic equilibrium condition, $\nabla \cdot \sigma(\bar{u}) + \bar{b} = \bar{0}$, via the finite element method ($\sigma$ is the stress tensor and $\bar{b}$ is the body force vector).
The stresses $\sigma$ are related to the strains $\epsilon$, and thus to the displacements, through Hooke's law for continuous media, $\sigma = C(\bar{x})\,\epsilon(\bar{u})$, where $C$ is the material stiffness tensor. The material stiffness tensor depends on the local microstructure. Following the density-based SIMP (Solid Isotropic Material with Penalization) approach, $C$ is interpolated between void and material using the normalized density $\rho$. The distribution of $\rho$ determines the topology, with $\rho = 0$ implying void (non-existence of material) and $\rho = 1$ implying material. The objective, the constraints, and the displacement field are all functions of the design variables ($\rho$ and the microstructural parameters), which in turn depend on the optimization variables.

Fig. S2 shows the flowchart of our design automation approach based on topology optimization. The design objective and constraints extracted from the engineering design problem, along with the mechanical loads and boundary conditions, constitute the inputs to the workflow. The inputs also include the choice of the microstructure, i.e., short fibers embedded in a matrix. The optimization problem is set up after identifying the appropriate design and optimization variables. After initializing the optimization variables (and thus the design variables) to meaningful values and extracting the material properties via a homogenization scheme, we perform forward analysis, where we employ the finite element (FE) method to solve the discretized elastostatics equations and obtain the equilibrium displacement field. The objective and constraints are evaluated using the results of the FE analysis. This is followed by sensitivity analysis, where we calculate the design sensitivities, i.e., the objective and constraint gradients with respect to the optimization variables. A gradient-based optimizer then uses the design sensitivities to update the optimization variables and hence the design variables.
We extract the new material properties from the updated design variables via the homogenization scheme and iterate the process until convergence. Discussion of the finer details of the homogenization scheme, the forward problem setup, and the sensitivity analysis follows.
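The SIMP interpolation mentioned above can be sketched as follows; the penalization exponent p = 3 and the small stiffness floor are illustrative assumptions, not values taken from this text.

```python
import numpy as np

def simp_modulus(rho, E_solid, p=3.0, E_min_ratio=1e-3):
    """SIMP: interpolate stiffness between (near-)void and solid material.

    rho = 0 gives a small nonzero floor (avoids singular stiffness matrices),
    rho = 1 gives the full solid stiffness; intermediate densities are
    penalized by the exponent p so the optimizer favors 0-1 designs.
    """
    rho = np.asarray(rho, dtype=float)
    return E_solid * (E_min_ratio + (1.0 - E_min_ratio) * rho**p)
```

The penalization makes intermediate densities structurally inefficient (half density yields far less than half stiffness), which drives the optimizer toward a crisp void/material layout.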

Mori-Tanaka Homogenization Scheme
The Mori-Tanaka homogenization approach 1,2 provides analytical estimates for our choice of microstructure, fiber-based composites. We adopted the Mori-Tanaka approach because: i) the theoretical underpinnings and limitations of the method are well understood and reasonable for approximate solutions; 2,3 ii) it has been shown to deliver good predictions of the effective elasticity, among other tensorial properties, when compared with experiments 3-6 and high-fidelity numerical simulations; 7 and iii) it is expressed in analytical, albeit complicated, form, which is advantageous given the computational efficiency gains over numerical homogenization techniques. For the special case of a composite consisting of two isotropic phases, aligned spheroidal reinforcement (fibers) in a matrix, the effective transversely isotropic stiffness tensor $C$ of the composite is given by:

$$C = C_m + f\,(C_f - C_m)\,A \tag{S2}$$

In eq. (S2), $f$ is the reinforcement volume fraction, $\alpha$ is the reinforcement aspect ratio, $A$ is the Mori-Tanaka strain concentration tensor, and $C_m$ and $C_f$ are the isotropic stiffness tensors of the matrix and fiber phases. The Mori-Tanaka strain concentration tensor is given by:

$$A = A^{dil}\left[(1 - f)\,I + f\,A^{dil}\right]^{-1} \tag{S3}$$

Here, $I$ is the fourth-order identity tensor such that $IA = AI = A$, and $A^{dil}$ is the dilute strain concentration tensor, which is in turn given by:

$$A^{dil} = \left[I + S\,C_m^{-1}\,(C_f - C_m)\right]^{-1} \tag{S4}$$

Here, $S = S(\alpha, \nu_m)$ is the Eshelby tensor, 8,9 which is a function of only the aspect ratio $\alpha$ and the Poisson's ratio of the matrix material, $\nu_m$. The stiffness tensor $C$ is defined in a local coordinate system attached to the fiber axis; the stiffness tensor in the global coordinate system, $\bar{C} = \bar{C}(\theta, \varphi, \alpha, f, C_m, C_f)$, is obtained via standard tensor rotation transformations of $C$, where $\theta$ and $\varphi$ are the angles between the x-axes of the local and global coordinate systems in the xy-plane and xz-plane respectively. In the optimization problem, we define the microstructure parameters $f$, $\theta$, and $\varphi$ in terms of the optimization variables $\bar{x}$ (more details in the following sub-section).
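The chain of eqs. (S2)-(S4) can be sketched in code. To keep the Eshelby tensor in closed form, this sketch assumes spherical inclusions rather than the spheroidal fibers used in the paper, and represents fourth-order tensors as 6×6 matrices in Mandel notation; the material parameters in the usage below are illustrative.

```python
import numpy as np

I6 = np.eye(6)
ivec = np.array([1.0, 1, 1, 0, 0, 0])
J = np.outer(ivec, ivec) / 3.0   # volumetric projector (Mandel notation)
D = I6 - J                       # deviatoric projector

def iso_stiffness(E, nu):
    """Isotropic stiffness tensor as a 6x6 matrix in Mandel notation."""
    K = E / (3.0 * (1.0 - 2.0 * nu))
    G = E / (2.0 * (1.0 + nu))
    return 3.0 * K * J + 2.0 * G * D

def eshelby_sphere(nu_m):
    """Eshelby tensor for a SPHERICAL inclusion (closed form); the paper's
    spheroidal-fiber case S(alpha, nu_m) is considerably more involved."""
    a = (1.0 + nu_m) / (3.0 * (1.0 - nu_m))
    b = 2.0 * (4.0 - 5.0 * nu_m) / (15.0 * (1.0 - nu_m))
    return a * J + b * D

def mori_tanaka(Cm, Cf, S, f):
    """Effective stiffness per the structure of eqs. (S2)-(S4)."""
    A_dil = np.linalg.inv(I6 + S @ np.linalg.inv(Cm) @ (Cf - Cm))  # eq. (S4)
    A = A_dil @ np.linalg.inv((1.0 - f) * I6 + f * A_dil)          # eq. (S3)
    return Cm + f * (Cf - Cm) @ A                                  # eq. (S2)
```

For spherical inclusions the Mori-Tanaka estimate of the effective bulk modulus coincides with the Hashin-Shtrikman lower bound, which makes a convenient correctness check for an implementation.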

Forward Analysis and Filtering
We solve the optimization problem by discretizing the problem domain into finite elements. We associate the optimization variables with the nodes of the discretized finite element mesh, and the design variables $\rho$, $f$, $\theta$, and $\varphi$, and thus the material stiffness tensor $\bar{C}(\rho, f, \alpha, \theta, \varphi)$, with the elements. The discretized form of the elastostatic equilibrium can be described compactly by the residual vector equation $\bar{R}(\bar{u}) = \bar{0}$. The global finite element stiffness matrix is then defined as $K = \partial\bar{R}/\partial\bar{u}$. For linear elasticity, the residual equation is simply $K\bar{u} - \bar{f} = \bar{0}$, with $\bar{f}$ being the global force vector. Forward analysis then involves solving the system of equations $\bar{R}(\bar{u}) = \bar{0}$ (a linear system in our case) to obtain $\bar{u}$.
We remarked in the main text that the design variables are all defined as functions of the optimization variables. This serves to normalize the bounds as well as to regularize the design problem, in order to achieve mesh-independent results and a good 0-1 design, i.e., a clear demarcation between void and material. We employ standard regularization techniques based on filtering of the optimization variables. 10-12 To ensure smooth designs, we use a linear smoothing filter 10 for the optimization variables. The linear smoothing filter (denoted by $F_L$) smears out any sharp transitions and gives smoothly varying fields:

$$F_L(\bar{x})_i = \sum_j w_{ij}\,\bar{x}_j$$

Here, $w$ is the linear filter weight matrix, defined as:

$$w_{ij} = \frac{\max(0,\, R - d_{ij})}{\sum_k \max(0,\, R - d_{ik})}$$

Here, $d_{ij}$ is the distance between the centroids of elements $i$ and $j$, and $R$ is the specified smoothing radius.
Additionally, we apply a projection filter 10,12 on top of the linear smoothing filter when calculating the densities. The projection filter, $F_P$, maps values between given lower and upper bounds to either the lower or the upper bound. This ensures a crisp design with a sharp boundary and no intermediate values, which is highly desirable for the densities in order to obtain a well-defined topology. It is defined as:

$$F_P(x) = \frac{\tanh(\beta\eta) + \tanh\!\big(\beta(x - \eta)\big)}{\tanh(\beta\eta) + \tanh\!\big(\beta(1 - \eta)\big)}$$

Here, $\beta$ and $\eta$ are the projection filter parameters. When $\beta \to 0$, the projection filter approaches a linear function ($F_P(x) \to x$), shown by the black dashed line in Fig. S3. The same figure shows the behavior of the projection filter at various values of $\beta$. In the limit $\beta \to \infty$, the projection filter approaches the Heaviside step function with $\eta = 0$; $\eta$ then serves to shift the threshold along the horizontal axis. Note that the projection filter with $\eta = 0$ provides control of the minimum feature size while our choice $\eta = 0.5$ does not (refer to Guest et al. 10 ); it simply gives us a 0-1 design. The relationship between the physical design variables ($\rho$, $f$, $\theta$, and $\varphi$) and the optimization variables ($\bar{x}$) is:

$$v = v_{\min} + (v_{\max} - v_{\min})\,F(\bar{x})$$

Here, $v_{\max}$ and $v_{\min}$ are the maximum and minimum values of the design variable $v$, and $F$ is either a single filter ($F = F_L(\bar{x})$) or a composite filter consisting of the linear smoothing and projection filters ($F = F_P(F_L(\bar{x}))$).
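The two filters can be sketched as follows. The hat-weight smoothing matrix follows the standard density-filter construction; the tanh-based projection shown here is one common form with the limiting behavior described above (linear as β → 0, Heaviside at the threshold η as β → ∞), though the authors' exact functional form may differ.

```python
import numpy as np

def linear_filter_weights(centroids, R):
    """Linear smoothing filter: hat weights w_ij ∝ max(0, R - d_ij),
    normalized so that each row sums to one (constant fields are preserved)."""
    d = np.linalg.norm(centroids[:, None, :] - centroids[None, :, :], axis=-1)
    w = np.maximum(0.0, R - d)
    return w / w.sum(axis=1, keepdims=True)

def project(x, beta, eta=0.5):
    """Smoothed Heaviside projection (tanh form): identity as beta -> 0,
    step at the threshold eta as beta -> infinity."""
    x = np.asarray(x, dtype=float)
    return (np.tanh(beta * eta) + np.tanh(beta * (x - eta))) / (
        np.tanh(beta * eta) + np.tanh(beta * (1.0 - eta)))
```

A filtered field is obtained as `linear_filter_weights(centroids, R) @ x`, and `project` is applied elementwise on top of it when computing the densities.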

Sensitivity Analysis and Convergence
Sensitivity analysis involves the calculation of the design sensitivities, i.e., $df/d\bar{x}$, $dg_i/d\bar{x}$ and $dh_j/d\bar{x}$. We obtain the sensitivities using the commonly used adjoint method. Note that at static equilibrium, i.e., $\bar{R} = \bar{0}$:

$$\frac{d\bar{R}}{d\bar{x}} = \frac{\partial\bar{R}}{\partial\bar{x}} + K\,\frac{d\bar{u}}{d\bar{x}} = \bar{0} \quad \Rightarrow \quad \frac{d\bar{u}}{d\bar{x}} = -K^{-1}\,\frac{\partial\bar{R}}{\partial\bar{x}} \tag{S10}$$

Likewise, differentiating the objective and using eq. (S10):

$$\frac{df}{d\bar{x}} = \frac{\partial f}{\partial\bar{x}} + \frac{\partial f}{\partial\bar{u}}\,\frac{d\bar{u}}{d\bar{x}} = \frac{\partial f}{\partial\bar{x}} - \bar{\lambda}^{T}\,\frac{\partial\bar{R}}{\partial\bar{x}} \tag{S11}$$

Here, $\bar{\lambda}$ is the solution of the linear system $K^{T}\bar{\lambda} = \partial f/\partial\bar{u}$, which constitutes the adjoint problem. In the above equation, the partial derivatives $\partial\bar{R}/\partial\bar{x}$ and $\partial f/\partial\bar{x}$ are calculated numerically by a finite difference scheme. The constraint sensitivities $dg_i/d\bar{x}$ and $dh_j/d\bar{x}$ are calculated similarly.
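As a concrete sanity check of the adjoint machinery, consider a hypothetical two-spring system whose stiffness matrix depends linearly on the design variables. For compliance, $c = \bar{f}^T\bar{u}$, the adjoint solve $K^T\bar{\lambda} = \partial c/\partial\bar{u}$ gives $\bar{\lambda} = \bar{u}$ (the problem is self-adjoint), and since $\partial\bar{R}/\partial x_i = (\partial K/\partial x_i)\bar{u}$, the sensitivity reduces to $dc/dx_i = -\bar{u}^T(\partial K/\partial x_i)\bar{u}$. This toy example is illustrative, not the paper's FE model.

```python
import numpy as np

# Two springs in series with stiffnesses x[0], x[1]; DOF 1 is between the
# springs, DOF 2 is the loaded free end.
def assemble(x):
    k1, k2 = x
    return np.array([[k1 + k2, -k2],
                     [-k2,      k2]])

def compliance_and_grad(x, f):
    """Compliance c = f.u and its adjoint gradient dc/dx_i = -u.(dK/dx_i).u."""
    K = assemble(x)
    u = np.linalg.solve(K, f)
    dK1 = np.array([[1.0, 0.0], [0.0, 0.0]])   # dK/dk1
    dK2 = np.array([[1.0, -1.0], [-1.0, 1.0]]) # dK/dk2
    c = f @ u
    grad = np.array([-u @ dK1 @ u, -u @ dK2 @ u])
    return c, grad
```

The analytic gradient agrees with a finite-difference check, which mirrors how adjoint implementations are typically verified before use inside an optimizer.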
After calculating the sensitivities, we perform checks for optimality of the solution (in our case, convergence of the objective and constraints such that the change is < 10⁻⁴ over the last 4 iterations). If the optimality checks fail, a gradient-based optimization algorithm (in our case GCMMA, 13 the globally convergent method of moving asymptotes) calculates a new set of optimization variables $\bar{x}$. We then update the material properties to begin the next optimization iteration.

Material Compilation

The material compilation process involves two key components. The first is the realization of the parametrized microstructure obtained from the design automation process, i.e., the exact geometry and distribution of the fibers within the matrix; we term this process microstructure realization. The second is the preparation of the realized geometry for fabrication via 3D printing, accomplished by the voxelization process. The material compilation workflow is shown in Fig. S4. It starts with microstructure realization, followed by checks for possible overlaps or collisions amongst the generated fibers and verification of the volume fractions, and ends with the voxelization process. A detailed discussion of each of these steps follows. We initiate the discussion of the microstructure realization algorithm with a simple example, Case 3 of the planar structures as mentioned before, and then describe the process for a more general case. The objective, as previously stated, is to actualize the parametric microstructure distribution ($f$, $\theta$, and $\varphi$) obtained from the design automation step as an arrangement of short fibers that best approximates the mechanical behavior of the microstructure.

Microstructure Realization
In the main text, we stated that the fibers are initially arranged in a hexagonal packing pattern and separated by distances $d_a > L$ and $d_l > 2r$: the axial and lateral distances, respectively, from the centroid of any given fiber to the centroids of the immediately neighboring fibers ($r$ = fiber radius, $L$ = fiber length; see Fig. S5). Alternatively, we can consider each fiber to be encapsulated in an imaginary cylinder of diameter $d_l$ and length $d_a$, as shown in Fig. S5b. The goal then is to arrange these imaginary cylinders efficiently so that a given volume fraction can be achieved. We choose hexagonal close packing as it is the densest packing of cylinders, with the maximum possible packing density of $\pi/\sqrt{12}$. Noting that, for the fiber and the imaginary cylinder containing it, the ratio of the volume fractions and the ratio of the volumes ($V_f/V_{cyl}$) must be equal, we arrive at eq. (S12), which relates the packing parameters $d_a$ and $d_l$ to the fiber radius $r$ and volume fraction $f$:

$$f = \frac{\pi}{\sqrt{12}} \cdot \frac{V_f}{V_{cyl}} = \frac{\pi}{\sqrt{12}} \cdot \frac{2\pi r^3 (\alpha - 1/3)}{\pi\,(d_l/2)^2\,d_a} \tag{S12}$$
We specify $d_a$ and calculate $d_l$ using eq. (S12). This leads to a fiber arrangement as shown in Fig. S5a, with the fibers initially arranged along the packing orientations (both 0° in this case). The packing orientations, as mentioned in the main text, are obtained by averaging the orientation fields $\theta(\bar{x})$ and $\varphi(\bar{x})$ over the non-symmetric part(s) of the design (otherwise the average would trivially be zero). The same figure also shows the fibers in their final rotated configurations (Fig. S5c). We achieve the rotations by first obtaining the interpolated values of the orientation fields $\theta(\bar{x})$ and $\varphi(\bar{x})$ at the centroid of each fiber and then rotating the fiber about its centroid, after having accounted for the initial rotation. We stagger the fibers from layer to layer for the 2D structures but not for the 3D structures, where staggering otherwise leads to excessive collisions between fibers. Additionally, we introduce randomness in the fiber positions by rigidly translating each fiber in a random direction by a small distance before rotating it to its final configuration. These random perturbations and the staggering help avoid large gaps between the fibers that might create stress concentrations in the matrix material. For the example shown in Fig. S5, however, we did not introduce any randomness, for the sake of clarity.
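A sketch of this spacing calculation, under our reading of eq. (S12) (specify the axial spacing d_a and solve for the lateral spacing d_l; the fiber volume includes the two hemispherical end caps):

```python
import math

def fiber_volume(r, alpha):
    """Cylinder of total length 2*alpha*r with two hemispherical end caps."""
    return 2.0 * math.pi * r**3 * (alpha - 1.0 / 3.0)

def lateral_spacing(r, alpha, f, d_axial):
    """Solve f = (pi/sqrt(12)) * V_fiber / (pi * (d_l/2)^2 * d_axial) for d_l,
    i.e. hexagonally close-packed imaginary cylinders at the target f."""
    v = fiber_volume(r, alpha)
    return 2.0 * math.sqrt(v / (math.sqrt(12.0) * f * d_axial))
```

Back-substituting the returned `d_l` into the packing relation recovers the target volume fraction, which is a cheap self-consistency check.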
We now describe the more general workflow with the help of a more complicated example, Case 4 of the optimal design of planar structures, the result of which is shown in Fig. 2f of the main text. The formal flowchart is shown in Fig. S6 along with the results of the intermediate steps. We start with the abstract microstructure parameters obtained directly from the multi-scale design automation step. We divide the optimized macroscale domain into multiple regions if the range of orientations present is large, for example 0° to 90°. The decision to divide, and the number of divisions, are both based on the number of resulting fiber overlaps/collisions, which needs to be minimized. For example, we divide the domain into two regions for the case shown in Fig. S6, such that the regions with orientation angles greater than 45° and less than 45° are separated. The divided regions are processed independently from here on. We calculate the packing directions for each region, fill each region using hexagonal close packing, and then rotate the fibers about their centroids to complete the microstructure realization process.

We subject the fiber packing configuration to tests that check for collisions and verify the uniformity of the fiber volume fraction. For this purpose, we assume that the fiber end caps are flat surfaces instead of spherical caps, which simplifies the collision check algorithm but overestimates the number of collisions. The algorithm iterates through all the fibers, checking for possible overlap/collision between the current fiber and its nearest neighbors. The nearest neighbors are obtained by searching for fibers with start, end, or midpoints within a specified search radius of the start, end, or midpoint of the current fiber. The collision detection algorithm 14 identifies all possible collision scenarios based on the cylindrical geometry of the fibers. The algorithm sieves through most fiber pairs with a simple preliminary check.
This preliminary check works by computing the minimum distance $d$ between the axes of the two fibers in question. If $d \ge 2r$, there is no possibility of a collision, which accounts for most fiber pairs. If $d < 2r$, more nuanced checks are performed, the details of which can be found in Ketchel and Larochelle. 14 Excessive collisions (> 5% incidence), despite division of the domain to minimize them as illustrated in Fig. S6, indicate sharp, high-frequency gradients in the optimal orientation fields ($\theta(\bar{x})$ and $\varphi(\bar{x})$). This might be a result of inadequate regularization, i.e., insufficient smoothing of the optimization variables $\bar{x}$ associated with the orientation design variables. If inadequate regularization is not the cause, a potential solution is to smooth the optimal orientation fields and repeat the previously mentioned steps of the microstructure realization flowchart (see Fig. S6). During this smoothing operation, the values of $\theta$ and $\varphi$ are mapped onto a suitably chosen coarser grid via averaging, which smooths the fields by reducing the gradients. This smoothing, of course, comes at the cost of a loss of accuracy in the performance of the design, which we can easily quantify via FE analysis to check whether it is acceptable.
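The preliminary axis-distance cull can be sketched with a standard closest-points-between-segments computation; this generic routine is illustrative and is not Ketchel and Larochelle's full algorithm.

```python
import numpy as np

def segment_distance(p1, q1, p2, q2):
    """Minimum distance between 3D segments [p1,q1] and [p2,q2]
    (closest-point parameters clamped to the segments)."""
    d1, d2, r = q1 - p1, q2 - p2, p1 - p2
    a, e = d1 @ d1, d2 @ d2              # squared lengths (assumed nonzero)
    b, c, f = d1 @ d2, d1 @ r, d2 @ r
    denom = a * e - b * b                # ~0 when the axes are parallel
    s = np.clip((b * f - c * e) / denom, 0.0, 1.0) if denom > 1e-12 else 0.0
    t = (b * s + f) / e
    if t < 0.0:
        t, s = 0.0, np.clip(-c / a, 0.0, 1.0)
    elif t > 1.0:
        t, s = 1.0, np.clip((b - c) / a, 0.0, 1.0)
    return float(np.linalg.norm((p1 + s * d1) - (p2 + t * d2)))

def axes_may_collide(p1, q1, p2, q2, r_fiber):
    """Preliminary cull: a collision is only possible if the axis-to-axis
    distance is below one fiber diameter (2 * r_fiber)."""
    return segment_distance(p1, q1, p2, q2) < 2.0 * r_fiber
```

Only pairs that survive this cull need to be passed to the detailed cylinder-cylinder collision checks.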

Collision Tests and Volume Fraction Verification
To verify the uniformity of the fiber volume fractions, we randomly sample, within the optimal macrostructure, 500 cubical volumes whose side length is chosen based on the fiber size (e.g., 20 mm for $\alpha = 40$). For each cubical volume, we enumerate the fibers that are fully or partially within it. For fibers completely within the cube, the fiber volume is the sum of the volumes of the two end caps and the cylinder in between ($= 2\pi r^3(\alpha - 1/3)$). For partial fibers, the volume is approximated as the fraction of the cylinder within the cube, plus one end cap if one of the end points lies in the cube. The ratio of the total volume occupied by the fibers to the cube volume gives the local fiber volume fraction. Fig. S7 shows histograms of the calculated volume fractions for a 2D and a 3D structure with $\alpha = 40$. It can clearly be seen that we achieve a good, uniform distribution of the fibers.

Depending on the 3D printing technology used, it is necessary to pre-process the geometry of the structure before printing. We developed a pre-processing algorithm for voxel-based 3D printing of fiber composites and adapted it for use with the Stratasys Connex3 Objet500. The principle of operation of the Objet500 is multimaterial jetting, which is similar to inkjet printing: instead of placing drops of ink on paper, it precisely places layers of individual droplets of different photopolymerizable resins onto a build tray. A UV light source cures each droplet layer as it is placed. The printer specifically needs a set of bitmaps, one per material used, for each layer to be printed. The binary value (0 or 1) stored at each pixel in the bitmap associated with a material indicates whether a droplet of that material is to be jetted at that location. The bitmaps, combined at the layer level and stacked, give the complete voxelized representation of the 3D structure to be printed.
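The volume-fraction check can be sketched as below. For brevity, this sketch attributes a whole fiber to a cube whenever the fiber's midpoint lies inside, rather than using the partial-fiber bookkeeping described above.

```python
import numpy as np

def fiber_volume(r, alpha):
    """Fiber volume: cylinder plus two hemispherical end caps, length 2*alpha*r."""
    return 2.0 * np.pi * r**3 * (alpha - 1.0 / 3.0)

def local_volume_fractions(midpoints, r, alpha, box_min, box_max, side, n=500, seed=0):
    """Sample n random cubes and estimate the local fiber volume fraction.

    Simplification for illustration: a fiber counts entirely toward a cube
    if its midpoint lies inside the cube (half-open bounds).
    """
    rng = np.random.default_rng(seed)
    corners = rng.uniform(box_min, box_max - side, size=(n, 3))
    v = fiber_volume(r, alpha)
    fractions = np.empty(n)
    for i, c in enumerate(corners):
        inside = np.all((midpoints >= c) & (midpoints < c + side), axis=1)
        fractions[i] = inside.sum() * v / side**3
    return fractions
```

A narrow histogram of the returned fractions around the target value indicates a uniform fiber distribution, as in Fig. S7.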

Voxelization
We employed the voxel-based modeling tool Monolith 15,16 to obtain the voxelized representation of the composite structures produced by the microstructure realization process. We then use the voxelized representation to generate the bitmaps necessary for voxel-based 3D printing. Fig. S8 shows the flowchart of the voxelization process. The process starts with the compilation of the inputs: the fiber diameter and the start and end points of each fiber. The next step is to represent the fibers as line segments joining the start and end points and to store this information in a CAD file (in Rhino's .3dm format, to be specific) needed by Monolith. Apart from this CAD file, Monolith requires the printer voxel dimensions (42.33×84.66×30 μm) and a watertight mesh (called the mask mesh) of the macroscale topology obtained from the design automation step. Monolith first generates distance fields for the fibers within the mask mesh and ignores the fibers (or parts of fibers) outside it. The distance fields are gradient fields calculated from the distance to a geometric primitive, a line segment in this case. An appropriate threshold value gives the boundary, and hence the topology, of the desired solid, a cylinder in this case. Monolith then rasterizes the entire minimum bounding box containing the mask mesh, where rasterization refers to the process of converting the continuous representation of geometric entities such as lines, triangles, and cylinders into discrete volume elements, i.e., voxels. During this process, each voxel is also assigned a material based on its location. Voxels outside the mask mesh are regarded as void, while voxels inside are assigned either fiber or matrix material depending on whether the voxel is within or outside the geometries generated by the distance fields. The mask mesh thus serves as a boundary and lets the rasterization algorithm ignore all entities outside this mesh.
Finally, Monolith slices the rasterized entities to obtain the binary bitmaps needed to print with the Objet500.
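The thresholded distance-field rasterization can be sketched as follows; this is an illustrative reimplementation with uniform cubic voxels, not Monolith's actual code. Voxel centers within one fiber radius of the axis segment become fiber voxels, yielding a capsule-shaped solid.

```python
import numpy as np

def point_segment_distance(pts, a, b):
    """Distance from points (N, 3) to the segment from a to b."""
    ab = b - a
    t = np.clip(((pts - a) @ ab) / (ab @ ab), 0.0, 1.0)
    return np.linalg.norm(pts - (a + t[:, None] * ab), axis=1)

def voxelize_fiber(a, b, r, voxel, bbox_min, shape):
    """Boolean voxel grid: True where the voxel center lies within r of the
    axis segment ab (a thresholded distance field)."""
    idx = np.indices(shape).reshape(3, -1).T
    centers = bbox_min + (idx + 0.5) * voxel
    mask = point_segment_distance(centers, a, b) <= r
    return mask.reshape(shape)
```

Stacking per-material boolean grids of this kind layer by layer gives exactly the kind of bitmap set the printer consumes; the total solid-voxel volume converges to the capsule volume as the voxel size shrinks.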

Simulations -Setup and Convergence
In this section, we present the details of the topology optimization problem setup, including the finite element simulation setup. In addition, selected snapshots of the designs during optimization are presented to illustrate the convergence of the designs towards the optimal structure. We divide this section into two sub-sections, one each for the 2D and 3D problems. In both the 2D and 3D problems, all the optimization variables (except the ones related to density) are bounded to vary between 0 and 1. The lower limit of the density-related optimization variables is set to 0.001 rather than 0 to avoid the numerical issues associated with zero-stiffness finite elements. The orientation design variables $\theta$ and $\varphi$ can vary from 0° to 180°. We chose $\rho_{th} = 0.5$ for all the results, as any material with $\rho < 0.5$ adds insignificantly to the overall stiffness of the structure: its material stiffness is only about 10% or less of the bulk value.

Optimal design of planar structures
For all the planar problems discussed in the main text, we used regular 4-noded plane-stress linear quadrilateral finite elements of size $h = 1.25$ mm to discretize the design domain (a 300×200 mm rectangle). We applied a linear smoothing filter with radius $8h$ to the optimization variables. A projection filter was applied on top of the linear filter after the first 200 steps. The value of $\beta$ was doubled after every 200 steps, starting from 3.125, for reasons explained later. We discussed four different cases in the main text: Case 1 is optimization with a fixed isotropic microstructure; Case 2 is optimization of the microstructure with a fixed macrostructure; Cases 3 and 4 are multiscale optimization with one and two loading scenarios, respectively. The objective and constraint convergence histories for these four cases are presented in Fig. S9. The designs at specific iterations are also shown in Fig. S10, Fig. S11, Fig. S12 and Fig. S13 for Cases 1, 2, 3 and 4 respectively, along with arrows that depict the optimal fiber orientations and the $\rho = 0.5$ iso-contour that demarcates void (isotropic matrix for Case 2) from material.
We normalized the objective at the beginning of each simulation with the strain energy of the initial design (0th iteration). For Case 4, with two forward analyses, we still normalized the objective but ensured that the contributions of the two forward analyses to the objective were initially equal. Likewise, we chose the initial designs such that the constraint is satisfied and starts with a value close to 0. The designs evolve quickly to what looks like the final topology, as seen in Fig. S10-Fig. S13, but these designs contain intermediate densities that are then gradually removed by the projection filter. As mentioned in Section 2.2, the purpose of the projection filter is to map the intermediate densities to either 0 or 1. If done in a single step with a high $\beta$ value, this would lead to gross violation of the constraint, which might then lead the optimizer to a sub-optimal design or simply to slow convergence. To avoid this, we ramped the value of $\beta$ gradually, doubling it after every 200 steps as mentioned earlier; the effect can clearly be seen in the objective and constraint convergence histories. Table S1 shows the stiffness values obtained from simulations and experiments (averaged) for structures optimized with the setup of Case 3, with $\alpha = 10$ and $40$, in both bending and tension. The averaged stiffness values obtained from experiments for the three differently sized structures (300×200 mm, 210×120 mm and 120×80 mm) with $\alpha = 10$ are shown in Table S2.
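The β continuation described above, with the projection inactive for the first 200 steps and β doubling every 200 steps from 3.125, can be sketched as:

```python
def beta_schedule(iteration, beta0=3.125, period=200):
    """Projection-filter continuation: no projection for the first `period`
    iterations, then beta doubles every `period` iterations from beta0."""
    if iteration < period:
        return None  # projection filter not yet active
    return beta0 * 2 ** ((iteration - period) // period)
```

Ramping β this way keeps each projection tightening small enough that the constraint is never grossly violated in a single step.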

Optimal design of 3D structures
We used regular 8-noded linear hexahedral elements of size $h = 1.6$ mm to discretize the design domain and a linear filter of radius $6h$ for all the 3D problems discussed in the main text. In Figs. 4b-c of the main text, we showed the results of the optimization with different symmetry conditions on the macroscale topology and the microstructure. Table S3 shows the setup used for the three different cases (2-fold symmetry, 4-fold symmetry, and the mixed case) and the results obtained. For Case 1, 2-fold symmetry (the result in Fig. 4b), we set the optimization variables such that the design variables are mirror-symmetric about the xy- and xz-planes, i.e., for each design variable $v \in \{\rho, f, \theta, \varphi\}$:

$$v(x, y, z) = v(x, -y, z) = v(x, y, -z) \tag{S13}$$

Similarly, to enforce 4-fold symmetry for Case 2 (the result in Fig. 4c), in addition to the conditions stated above, we impose the following condition, which enforces designs to be symmetric about the plane bisecting the xy- and xz-planes:

$$v(x, y, z) = v(x, z, y) \tag{S14}$$

For Case 3 (the result in Fig. 4d), just as before, we use eq. (S13) to enforce symmetry about the xy- and xz-planes for $f$, $\theta$, and $\varphi$. In addition, we enforce the condition $\rho(x, y, z) = \rho(x, z, y)$, which yields 4-fold symmetry for the macrostructure, as in Case 2, but only 2-fold symmetry for the microstructure, as in Case 1. The enforced symmetries become evident in the measured mechanical responses shown in Table S3. Table S4 tabulates the stiffness values obtained from the simulations and the experiments for various loading directions for the results shown in Figs. 4d and 4e of the main text, which use the same $f = 0.1$ but different $\alpha$ (= 10 and 40 respectively), with the formulation of Case 3. Fig. S15 and Fig. S16 show the $\rho = 0.5$ iso-contour surface at different design iterations for the same structures. The results are very similar to the 2D case; we show one half of each structure opaque and the other half transparent, with fiber orientations, for visualization purposes. Fig. S17 shows cross-sections at different locations along the length of these beams.

[Figure caption (measured response curves): Blue curve – Case 1 in Table S3 (result in Fig. 4b); Green curve – Case 2 in Table S3 (result in Fig. 4c); Black curve – Case 3 in Table S3 (result in Fig. 4d, α = 10); Red curve – same as Case 3 but with α = 40 (result in Fig. 4e).]
[Fig. S15 caption: design iterations for the structure in Fig. 4d of the main text; fiber orientations are also shown.]
[Fig. S16 caption: design iterations for the structure in Fig. 4e of the main text; fiber orientations are also shown.]
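One way to impose such symmetries on a design field stored on a regular grid is to average the field with its mirror images, sketched below. This is illustrative only: the sign conventions for the orientation angles under reflection are omitted, and the authors instead constrain the optimization variables directly.

```python
import numpy as np

def symmetrize_2fold(v):
    """Enforce reflection symmetry about the xz- and xy-planes by averaging a
    field v[ix, iy, iz] with its mirror images (grid assumed symmetric)."""
    v = 0.5 * (v + v[:, ::-1, :])   # mirror y -> -y
    v = 0.5 * (v + v[:, :, ::-1])   # mirror z -> -z
    return v

def symmetrize_4fold(v):
    """Additionally enforce symmetry under the y <-> z swap (the plane
    bisecting the xy- and xz-planes); requires equal grid size in y and z."""
    v = symmetrize_2fold(v)
    return 0.5 * (v + np.swapaxes(v, 1, 2))
```

Because the mirror operations commute, applying them sequentially leaves all the imposed symmetries intact simultaneously.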

Material Characterization
For the experimental results shown in Fig. 1b of the main text, we printed prismatic bar samples of length 80 mm and rectangular cross-section with dimensions 20.6×19.83 mm. Samples were printed with fiber aspect ratios $\alpha$ varying from 5 to 40, while holding the volume fraction $f$ and the orientation angles $\theta$ and $\varphi$ fixed at 0.1, 0° and 0° respectively. As a result, the samples had their fibers aligned with the loading direction (the x-axis), and we measured the modulus along the fiber axis ($E_x = ((\bar{C}^{-1})_{11})^{-1}$). The samples were 3D printed with their long dimension aligned with the x-direction of the print tray, i.e., the direction in which the printhead moves while dispensing ink and curing the printed materials. We used the soft elastomer Tango+ for the matrix and the stiff glassy polymer VeroClear for the fibers; both materials are supplied by Stratasys Ltd. in resin form. The samples were subjected to uniaxial tension/compression up to 1% strain at a 1%/min strain rate. All the experiments were performed on an Instron 5943 mechanical testing system with a 1 kN load cell. The results are tabulated in Table S5. We used the average of the moduli obtained in tension and compression to fit the experimental results to the Mori-Tanaka homogenization theory, with $E_m$ and $E_f$ as the fitting parameters. We obtained the best fit with $E_m = 1$. The discrepancy between the measured and fitted values for the elastomeric matrix can be explained by smearing of the interface between the fiber and matrix materials. This smearing, we surmise, results from mixing of the matrix and fiber materials during the printing process, which manifests as an increased effective matrix modulus. Fig. S18 shows the variation of the effective tension modulus of composite bars with the initial axial gap between fibers, $d_a - L$, set during the microstructure realization step, while the microstructural descriptors are all kept the same ($\theta = \varphi = 0$, $\alpha = 10$ and $f = 0.1$).
We observed that the smearing effect can only be minimized, not avoided entirely, and chose the initial axial gap to be at least 1 mm.

Validation of Optimal Structures
We also printed the optimal beam structures along the print tray's x-direction and then subjected them to mechanical testing to validate the simulation results. Table S6 shows, through schematics, the experimental setups used under the different loading scenarios, viz. tension and bending. We took advantage of the multi-material printing capability to print stiff supporting structures made of the Vero material: a block at the base of the beam to assist in clamping the structure, and a short tab at the loading end (both shown in black in the schematics in Table S6). We fixed the base structure in a rigid frame in the appropriate orientation (vertical for tension, horizontal for bending), thus clamping the base of the beam, while the end tab was attached to the loading fixture of the mechanical testing machine so that the desired load could be transmitted to the beam. The comparisons between the experimental and simulation results are presented in Table S1 and Table S4 for the 2D and 3D structures respectively. The discrepancies between them might result from the smearing effect discussed in the previous sub-section. The material compilation process introduces a small degree of non-uniformity, as evidenced in Fig. S7, leading to non-uniform spacing between the fibers, which in turn leads to slight deviations from the expected behavior. In addition, the discrete nature of the voxel-based printing process, leading to an inaccurate representation of the fibers, and printing flaws might also contribute to the discrepancies. The inherent anisotropy induced by the Connex3 Objet500 printer, as reported in the literature, 19,21 might be another contributing factor, which we have not considered in this study.