Abstract
Calibration of highly dynamic multiphysics manufacturing processes such as electrohydrodynamics-based additive manufacturing (AM) technologies (E-jet printing) is still performed by labor-intensive trial-and-error practices. Such practices have hindered the broad adoption of these technologies, demanding a new paradigm of self-calibrating E-jet printing machines. Here we develop an end-to-end physics-informed Bayesian learning framework (GPJet) which can learn the jet process dynamics with minimum experimental cost. GPJet consists of three modules: the machine vision module, the physics-based modeling module, and the machine learning (ML) module. GPJet was tested on a virtual E-jet printing machine with in-process jet monitoring capabilities. Our results show that the Machine Vision module can extract high-fidelity jet features in real time from video data using an automated, parallelized computer vision workflow. The Machine Vision module, combined with the Physics-based modeling module, can also act as closed-loop sensory feedback of high- and low-fidelity data to the Machine Learning module. This work extends the application of intelligent AM machines to more complex working conditions while reducing cost and increasing computational efficiency.
Introduction
The programmable assembly of functional inks in two and three dimensions using computer numerically controlled (CNC) machines coupled with printing technologies has revolutionized the design and fabrication of physical objects. Extrusion-based additive manufacturing (AM) technologies, often referred to as direct ink writing or 3D printing, are transforming fields such as healthcare, robotics, electronics, and sustainability^{1,2}. While the potential of 3D printing is frequently celebrated in scientific journals and the media, there is a “secret” that practitioners and companies of 3D printing do not emphasize. This underreported reality entails the extensive experimentation and manual labor required to tune process parameters, which are numerous and often interdependent, to achieve process stability and reproducible outcomes^{3}. Every time a new material needs to be processed, or ambient conditions vary, practitioners follow trial-and-error approaches for printing process calibration. These calibration practices have led to the creation of experienced “super users” at the expense of an enormous degree of individual process engineering.
Electrohydrodynamics-based AM technologies, also known as E-jet printing technologies, are notable examples of extrusion-based AM technologies that have been facing such challenges due to their complex multiphysics and highly dynamic nature^{4,5}. A wide variety of materials, also termed inks, can be processed with E-jet printing technologies. Processable inks include homogeneous solutions (pure solvents or solubilized materials), suspensions (such as colloids of quantum dots, nanoparticles, and insoluble materials), melts (such as molten metal, wax, etc.), biomolecules (DNA, proteins, and bacteria), and polymers (solutions or melts). During E-jet printing, the ink is extruded through a charged needle tip towards a grounded collector. As soon as the electrostatic stresses overcome the polymer material’s viscoelastic and surface tension stresses, a cone-jet is formed in the free-flow regime (Fig. 1a). A region of instabilities, whose span along the jet depends on the nature of the polymer (solution or melt), follows the cone-jet regime. Focusing on the polymer melt case, where the instabilities region is closer to the collector (Fig. 1a), a translational stage can be employed to write high-resolution fibers (Fig. 1b), a process known as melt electrowriting (MEW). With this capability, MEW has been established as an emerging high-resolution AM technology for fabricating architected biomaterial scaffolds, opening new tissue engineering avenues. MEW has undergone ten years of process optimization studies since its first inception in the literature^{6,7}. Tunable fiber diameter and patterning fidelity are critical scaffold attributes for biological outcomes and efficacy.
These can be optimized by tuning five interdependent user-controlled process parameters, assuming stable ambient environmental conditions (temperature and humidity): (a) the applied voltage at the needle tip, (b) the extrusion volumetric flow rate, (c) the temperature at the syringe, (d) the collector speed, and (e) the needle-tip-to-collector distance. Considering the dynamic range of each process variable in combination with the highly sensitive spatial and temporal scales of the process in the micron range, one quickly realizes why process optimization took ten years, with the vast majority of these studies using one specific material, i.e., polycaprolactone (PCL).
Earlier studies achieved printing fidelity with MEW using an intuition-based approach, i.e., manually selecting values for the critical process parameters, performing post-printing fidelity measurements, assessing trends and patterns in the data, and selecting process parameter settings for follow-up experimentation. Later studies focused on understanding the previously identified printing regimes with respect to the physics and the dynamics of the process^{8,9,10}. A recent study systematically approached the calibration process by exploring the parameter space using a Design-of-Experiments approach on a simple Cartesian grid defined by the number of independent process parameters^{11}. In this study, computer vision was employed to image the jet in the free-flow regime as a function of various process parameter conditions in a high-throughput manner^{11}. The generated dataset was then assessed offline to identify high-fidelity printability regimes^{11}. However, selecting an exploration strategy implies picking a resolution without knowing the model function. To address that, the resolution is often chosen to be high, aiming for an exhaustive search to avoid inaccuracies. With the high dimensionality of the parameter space, this brute-force data collection method quickly fails to explore the space efficiently and becomes prone to bias.
The challenges mentioned above, combined with the demand for increasingly complex and reproducible products, warrant a new paradigm for E-jet printing machines. In this paradigm, rigid machines calibrated by trial-and-error practices are replaced by “intelligent” autonomous machines capable of adapting and learning process dynamics with minimum experimental cost. Artificial intelligence and machine learning (ML) are transforming many areas of experimental science in this direction. However, advances in manufacturing science are mainly driven by expensive physics-based simulations that cannot resolve all scales and, more recently, by data-hungry neural networks trained offline with in-process monitoring datasets for defect detection and process performance prediction on various AM platform technologies^{12}.
To address these challenges, we adopt an approach inspired by the operating principles behind autonomous materials experimentation platforms, also known as research robots^{13,14,15}, and by the field of physics-informed machine learning^{16,17}. Research robots demonstrate closed-loop control through online learning from prior experiments and the planning and execution of new experiments. Physics-informed machine learning lays the foundations for integrating data with domain knowledge in the form of mathematical models to allow efficient simulations of highly multiphysics phenomena. The underlying framework of research robots provides a systematic data-driven approach for identifying the best follow-up experiments to optimize unknown functions. The functions are approximated by Gaussian process regression (GPR), a robust statistical, non-parametric technique for both function approximation and uncertainty quantification^{18,19}. During the Bayesian optimization loop, an acquisition function balances experiments that explore the unknown function against experiments that exploit prior knowledge by considering the quantified uncertainty after each function approximation step^{20}. Efficiency with respect to the utilization of experimental resources can be further improved by augmenting the surrogate model with prior domain knowledge following a multi-fidelity modeling approach^{21,22,23}. The success of this approach has been documented in the field of computational science, where simple and potentially inaccurate models that carry a low computational cost are used to achieve predictive accuracy from a small set of high-fidelity observations obtained from accurate models that carry a high computational cost.
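One standard choice that makes this exploration–exploitation balance concrete is the lower confidence bound (LCB) acquisition, shown here purely as an illustration of the principle (not necessarily the acquisition used in this work): \(a(x)=\mu (x)-\kappa \sigma (x)\), where \(\mu (x)\) and \(\sigma (x)\) are the posterior mean and standard deviation of the GPR surrogate and \(\kappa \ge 0\) sets the trade-off. The next experiment is selected at the minimizer of \(a(x)\): a small \(\kappa\) exploits the current estimate of the optimum, while a large \(\kappa\) steers sampling towards regions of high uncertainty.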
Automated materials experimentation systems driven by Bayesian optimization active learning frameworks have demonstrated remarkable performance in autonomously searching the vast synthesis–process–structure–property landscape, resulting in the accelerated discovery of advanced materials for a wide variety of applications^{20,24,25,26,27,28}, including AM^{29,30}. However, the application of autonomic principles for the calibration of AM processes remains underexploited. In one study concerning E-jet printing of substrates with micron-scale topographical features, the authors demonstrated a research robot whose planner is informed by an inline nanosurface metrology tool and actively learns to tune the extrusion rate until it achieves a predefined topographical feature^{31}. In another study on direct ink writing of paste materials, the authors demonstrated an autonomous 3D printer whose planner is informed by machine vision cameras and adaptively searches the space of four process parameters to print single struts with geometrical features that match user-defined specifications^{32}.
In this paper, we employ principles from autonomous research robots to develop an end-to-end physics-informed probabilistic machine learning framework that sets the basis for the next generation of self-calibrating E-jet printing machines. Such a framework should allow both online extraction of jet features from in-process monitoring data and online robust modeling of process signature dynamics using the extracted data in the most computationally efficient way. Thus, we have followed a data-centric approach that leverages data of multiple fidelities from experiments and physics-domain knowledge to demonstrate the utility of the framework in both an offline and an online process calibration scenario.
To accomplish that, we construct a virtual MEW machine using a previously published video dataset acquired by a conventional camera that performs in situ jet monitoring under various process conditions, and we demonstrate that our data-driven framework, called GPJet, is capable of:

high-fidelity jet feature extraction in real time from video data using a parallelized computer vision algorithmic workflow that is systematically profiled under various implementations,

low-fidelity jet feature extraction from “cheap” physics-based models describing the evolution of the jet across the free-flow regime and the deposition dynamics of a gravity-driven viscous thread onto a moving surface, known as the “fluid-mechanical sewing machine.”

With these capabilities, we demonstrate that GPJet is a robust multi-fidelity modeling framework that can learn the process dynamics with minimum experimental cost, as measured by the required number of high-fidelity data points.
Our results are supported by performance tests comparing offline and online calibration scenarios. These reveal that the online ML planner, based on an active learning approach that balances exploration and exploitation, can learn the jet evolution in the free-flow regime much more efficiently when it is informed by physics, and can use that knowledge to adaptively tune the translational speed of the collector for minimum jet lag distance. In that case, the ML planner follows a decision-making strategy that reveals the universality of the fluid-mechanical sewing machine model in predicting the deposition dynamics of any viscous-jet printing process, regardless of the nature of the jet driving force.
Results
GPJet: the physicsinformed machine learning pipeline
To demonstrate the ability to learn the dynamics of E-jet printing processes in a data-driven fashion, we employ a pipeline-based approach depicted in Fig. 2. The approach is composed of three modules: the machine vision module, the physics-based modeling module, and the machine learning module. In GPJet, features that are representative of the printing process dynamics are extracted by the machine vision module and the physics-based modeling module. In the context of this paper, high-fidelity observations refer to the jet features extracted experimentally, and low-fidelity observations refer to the same jet features as predicted by a low-cost numerical model that is a good approximation of reality.
As a first step, jet features are engineered and extracted in real time using an algorithmic computer vision workflow that takes as input time-series video data (see Methods for details). The Machine Vision module allows us to probe and measure the jet dynamics, a capability hereafter denoted as jet metrology. Jet metrology serves as a feature extraction step for high-fidelity observations corresponding to the jet radius profile (\({R}_{j}\) [mm]) and the jet lag distance (\({L}_{j}\) [mm]), which are then fed into the Machine Learning module that can perform various Bayesian batch and online learning tasks (see Methods for details). The Machine Learning module can be further informed by low-fidelity observations, a capability hereafter denoted as multi-fidelity modeling. The low-fidelity observations are obtained by the Physics-based modeling module and correspond to the same engineered features that are extracted experimentally by the Machine Vision module (\({R}_{j}\) [mm] and \({L}_{j}\) [mm]).
Collectively, the GPJet pipeline offers a range of unique capabilities, from real-time feature extraction using computer vision to physics-informed machine learning, that aim to minimize experimental cost without sacrificing accuracy and robustness.
Dataset
To demonstrate the utility and performance of the GPJet pipeline, we curated a dataset that emulates a virtual E-jet printing machine with a dynamic range of 12 user-controlled machine settings. The dataset is depicted in Table S1 and is created based on previously published time-series video data^{10}. Specifically, the raw data were acquired by a conventional camera at 50 fps with a field of view spanning the area between the needle tip and the grounded collector of a melt electrowriting (MEW) system. A detailed explanation of the raw data and the preprocessing procedure used to derive the final curated dataset can be found in the Supplementary Note. MEW constitutes an ideal testbed for demonstrating the capabilities and the flexibility of our GPJet framework. The highly dynamic nature of the process and the multiple user-controlled independent process parameters pose several challenges, which we address both in an offline and an online self-calibrating machine scenario.
Learning jet dynamics from videos
As a first goal, we set out to tackle the challenge of real-time process monitoring and jet metrology. To demonstrate the highly dynamic nature of the process, we plot overlaid video frames showing the jet hitting a stationary collector (Fig. 3a). We chose to plot frames with a time step of 0.2 s, since the electrostatic nature of the process and the viscoelasticity of the molten jet cause instabilities on a smaller time scale (~0.02 s) and result in jet topologies that are indistinguishable to the naked eye. This number provided a starting point for setting a goal for the computational efficiency of the machine vision module for real-time performance. Since the camera acquisition time was 0.02 s (50 fps), we set the goal of keeping the computational processing time equal to or smaller than that.
To accomplish this, we started by dividing the computer vision workflow into specific algorithmic tasks and implemented a sequential code version. We continued by systematically profiling the code, identifying the computationally expensive tasks, and then gradually parallelizing the code to reduce processing time. This approach led to three different code implementations of the machine vision module: (a) sequential, (b) concurrent, and (c) parallel, with the last one achieving real-time performance. The results of the profiling experiments are shown in Fig. 3b, where all the tasks are plotted along with their respective processing times for the three different code implementations.
Specifically, the machine vision tasks per frame are the following:
Task 1: Read new video frame.
Task 2: Process the frame to reverse background color.
Task 3: Edge-based feature extraction and data storage.
Task 4: Object-based feature extraction and data storage.
Task 5: Show processed video output.
Task 6: Save video output.
Profiling the sequential code version reveals that an average time of 0.033 s is needed to perform the whole machine vision workflow per frame, with the most expensive task being edge-based feature extraction across the jet length (Fig. 3c). To alleviate this source of computational cost, we employed a multithreading strategy for the concurrent code version, which led to a modest improvement of 0.005 s.
Multithreading is a software technique for performing two or more tasks concurrently within the same application. It employs multiple threads to perform the tasks, with no limitation on the number of threads that can be used^{10}. We learned that multithreading can reduce the processing time of I/O-bound tasks almost to zero, but it does not improve the processing time of central processing unit (CPU)-bound tasks, such as Task 3 and Task 4, which are the most expensive.
To further reduce processing time, we augmented the concurrent version with a multiprocessing strategy, which led to the parallel code version. Multiprocessing systems have multiple processors running at the same time, so different tasks of an application can run on different processors in parallel. This capability considerably accelerates program performance. The limitation of this strategy is that the number of processes employed must be less than or equal to the number of processors (CPU cores) of the device^{10}. Finally, by employing multithreading for I/O-bound tasks (Tasks 1, 5, and 6) and multiprocessing for CPU-bound tasks (Tasks 3 and 4), we were able to achieve real-time process monitoring and jet metrology with a processing time down to 0.014 s.
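The task split described above can be sketched with Python's `concurrent.futures`. This is a minimal illustration, not the GPJet implementation: the task bodies are simulated (I/O-bound tasks as short sleeps), and in the real pipeline the CPU-bound Tasks 3–4 would be dispatched to a `ProcessPoolExecutor` rather than run serially as they are here.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def read_frame(i):
    """Task 1 (I/O-bound): simulated frame read."""
    time.sleep(0.01)
    return i

def extract_features(frame):
    """Tasks 3-4 (CPU-bound): placeholder feature extraction.

    In the real pipeline this is where multiprocessing pays off, since
    threads cannot speed up CPU-bound work under the Python GIL.
    """
    return frame * 2

def save_output(feature):
    """Tasks 5-6 (I/O-bound): simulated display/save."""
    time.sleep(0.01)
    return feature

def run_pipeline(n_frames, workers=4):
    """Overlap the I/O-bound stages with threads; CPU work stays serial here."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        frames = list(pool.map(read_frame, range(n_frames)))
        features = [extract_features(f) for f in frames]
        return list(pool.map(save_output, features))
```

With 4 worker threads, the 8 simulated reads and saves each complete in roughly two sleep periods instead of eight, mirroring the near-zero I/O cost reported for the concurrent version.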
Instrumented with the capability to perform jet feature extraction in real time, we then focused on quantifying features relevant to the process dynamics. With the edge-based feature extraction algorithm, which is described in detail in Learning Jet Dynamics from Videos & Physics under the Methods section, we were able to measure the jet diameter profile, the area of the whole jet, the angle between the vertical line connecting the nozzle tip to the collector and different points along the jet profile, and finally the translational jet speed at different points along the jet profile. The high-content spatiotemporal results are plotted in Fig. S1 of the Supplementary Information, demonstrating the breadth of information provided by the machine vision module and the fact that the jet point right above the collector undergoes highly fluctuating behavior that directly affects the printing quality.
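As an illustration of the edge-based measurement, a per-row width scan over a binarized jet image recovers a radius profile. The NumPy sketch below assumes a hypothetical mask layout (rows indexed from the needle tip downwards, foreground pixels marking the jet) and a made-up pixel calibration; it is not the actual GPJet implementation.

```python
import numpy as np

def jet_radius_profile(mask, px_per_mm=100.0):
    """Per-row jet radius [mm] from a binary jet mask.

    For each row, the jet width is the span between the leftmost and
    rightmost foreground pixels (the two detected edges); the radius is
    half that width. Rows containing no jet pixels yield NaN.
    """
    radii = np.full(mask.shape[0], np.nan)
    for i, row in enumerate(mask):
        cols = np.flatnonzero(row)
        if cols.size:
            radii[i] = (cols[-1] - cols[0] + 1) / (2.0 * px_per_mm)
    return radii
```

Applied frame by frame, the same scan also yields the jet area (sum of row widths) and, via frame differencing, point-wise jet speeds.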
We present the jet metrology results for two distinct phases of the printing process in Fig. 4aiii and Fig. 4biii, focusing on the jet point right above the collector, hereafter denoted as the point of interest. With the object-based feature extraction algorithm, which is described in detail in subsection 4.1 under the Methods section, we were able to detect key objects in the field of view, such as the needle tip, the Taylor cone (defined as the jet area between the needle tip outlet and the jet point \(2{R}_{o}\) away from the needle tip), the remaining jet, and the collector. In this way, we were able to measure the Lag distance, defined as the distance between the point of interest and the projection of the middle point of the nozzle tip outlet onto the collector. All detected objects are denoted graphically in Fig. 3d, which shows the video output after Task 4 of the computer vision workflow.
As a next step, we asked how we could leverage the extracted features to learn the dynamics of the process in the most efficient data-driven way, with respect to both experimental and computational cost. To address this question, we developed several Bayesian learning techniques, hereafter denoted as the Machine Learning module of the GPJet framework. The Machine Learning module takes as input the extracted high-fidelity data and initially uses Gaussian processes (GPs) to approximate the function describing the relationship between (a) the jet radius profile and the nozzle-tip-to-collector distance and (b) the Lag distance and the ratio of the collector speed to the jet speed at the point of interest.
Gaussian process regression (GPR) is a robust statistical, non-parametric technique for function approximation with kernel machines. GPR provides the important advantages of uncertainty quantification, the ability to perform well with small datasets, and the capability to easily include domain-aware physics-based models in the deployed kernels.
To learn how the jet radius profile evolves over the tip-to-collector distance, we chose the radial basis function (RBF) kernel and performed GPR. We trained the model under two different scenarios, with n = 5 and n = 10 observations chosen at equally spaced points along the jet length for the 1^{st} and 2^{nd} scenario, respectively. It is important to mention that the machine vision module provides n = 93 observations along the jet length. The results are shown in Fig. 5a, b for the two training scenarios. GPs can approximate the jet radius profile evolution with just n = 10 observations, showcasing the efficiency of our data-driven approach with respect to computational cost.
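The GPR step can be reproduced with a few lines of NumPy. The sketch below fits a zero-mean GP with an RBF kernel to n = 10 samples of a synthetic, smoothly decaying radius profile; the exponential decay and all hyperparameter values are illustrative assumptions, not the fitted values or the measured profile from this work.

```python
import numpy as np

def rbf(x1, x2, length=0.2, var=1.0):
    """RBF (squared-exponential) kernel between two 1-D point sets."""
    return var * np.exp(-0.5 * (x1[:, None] - x2[None, :]) ** 2 / length ** 2)

def gpr_predict(x_tr, y_tr, x_te, length=0.2, var=1.0, noise=1e-6):
    """GPR posterior mean and standard deviation at test points x_te."""
    K = rbf(x_tr, x_tr, length, var) + noise * np.eye(len(x_tr))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_tr))
    K_s = rbf(x_te, x_tr, length, var)
    mu = K_s @ alpha
    v = np.linalg.solve(L, K_s.T)
    std = np.sqrt(np.clip(var - np.sum(v ** 2, axis=0), 0.0, None))
    return mu, std

# Synthetic jet radius profile over the normalized tip-to-collector distance.
radius = lambda z: 0.1 + 0.9 * np.exp(-3.0 * z)   # illustrative, not measured
z_tr = np.linspace(0.0, 1.0, 10)                  # n = 10 equally spaced points
z_te = np.linspace(0.0, 1.0, 93)                  # dense grid, as in jet metrology
mu, std = gpr_predict(z_tr, radius(z_tr), z_te)
```

The posterior standard deviation `std` is what the active learning and Bayesian optimization steps later in the paper act on.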
To learn the function describing the relationship between the Lag distance and the ratio of the collector speed to the jet speed at the point of interest, we employ the same modeling strategy as before. Similarly, we set up two training scenarios with n = 4 and n = 12 observations, respectively. Note that the number of high-fidelity observations at our disposal is constrained by our previously published experimental dataset (see Machine vision module under the Methods section), where videos were acquired at only 12 different speed ratio settings. The results are shown in Fig. 5c, d for the two training scenarios. While in the 1^{st} training scenario GPR provides a smooth function approximation, the prediction error with respect to the experimental ground truth, quantified by the root mean square error (RMSE), is significantly higher than in the 2^{nd} training scenario (see Fig. S3b–d in the Supplementary Information). As a result, the function describing the relationship in question is hard to approximate due to the limited dataset available for testing our framework. Specifically, the dataset is non-uniform across the space of the tested independent process parameter (the ratio of the collector speed to the jet speed), leaving us with no data in certain regions of the space (see Fig. 5d).
Collectively, our machine vision module, informing the GPR capabilities of the machine learning module with high-fidelity observations, demonstrates that we can learn the dynamics of the process. Specifically, GPJet demonstrates excellent performance in predicting the jet radius profile evolution from a small number of high-fidelity observations (n = 10). Furthermore, GPJet demonstrates very good performance, given the available number of high-fidelity observations, in capturing the Lag distance behavior at different collector speed settings.
Learning jet dynamics from videos & physics
As a next step, we explored how we could further reduce the number of high-fidelity observations without losing the predictive capability of GPR with respect to the jet radius profile evolution. To accomplish that, we augmented the high-fidelity observations obtained by the machine vision module with low-fidelity observations obtained in a principled manner from a multiphysics model. The multiphysics model captures the electrohydrodynamics, the heat transfer, and the viscoelastic constitutive material behavior of the molten jet in 1D across the needle-tip-to-collector distance. The mathematical formulation and numerical implementation of the model are described in detail in subsection Machine learning module under the Methods section.
We set up our data-driven scheme with two fidelities corresponding to two different kernel machines integrated into one multi-fidelity kernel, in which the correlation between the two kernels is encoded as a linear relationship. In other words, we constrain the prior during GPR with physics-relevant knowledge, resulting in a physics-informed posterior prediction that requires far fewer high-fidelity observations.
We trained the multi-fidelity model under two different scenarios, with n = 6 and n = 7 high-fidelity observations, respectively. For both scenarios, the number of low-fidelity observations was kept at 32, equally spaced across the jet length. For the 1^{st} scenario, n = 6 equally spaced points were chosen across the jet length, depicted in the jet schematic of Fig. 6a (upper left). The results are shown in Fig. 6ai and Fig. 6aii. In Fig. 6ai, we plot the multi-fidelity GPR predictions for the low- and high-fidelity observations, respectively. In Fig. 6aii, we plot the predictions of the multi-fidelity GPR on the high-fidelity observations together with the predictions of a simple GP on the high-fidelity observations. Both plots demonstrate that we can learn the jet radius profile much better using two fidelities than using only one fidelity for the same number of high-fidelity observations. Our results point out that we lose predictive accuracy in the Taylor cone area (below the needle tip outlet). This was expected, because similar behavior was observed when the multiphysics model was tested; this observation informed the strategy of the 2^{nd} scenario, where we chose n = 7 high-fidelity observations with the additional point placed in the Taylor cone area. The results are shown in Fig. 6bi and Fig. 6bii, demonstrating that we have managed to further reduce the required number of high-fidelity observations that need to be extracted by the machine vision module without compromising predictive accuracy.
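A minimal two-fidelity scheme in the spirit described above can be sketched as follows: a scaling factor rho captures the assumed linear correlation between fidelities, and a GP learns the residual correction from the few high-fidelity points. The functions, hyperparameters, and the least-squares estimate of rho are illustrative simplifications of the multi-fidelity kernel actually used in GPJet.

```python
import numpy as np

def rbf(x1, x2, length=0.5):
    return np.exp(-0.5 * (x1[:, None] - x2[None, :]) ** 2 / length ** 2)

def gp_mean(x_tr, y_tr, x_te, length=0.5, noise=1e-6):
    """Posterior mean of a zero-mean GP with an RBF kernel."""
    K = rbf(x_tr, x_tr, length) + noise * np.eye(len(x_tr))
    return rbf(x_te, x_tr, length) @ np.linalg.solve(K, y_tr)

# Low fidelity: a cheap model, evaluable everywhere. High fidelity: few points.
f_lo = lambda x: np.sin(2 * np.pi * x)                        # illustrative
f_hi = lambda x: 1.5 * np.sin(2 * np.pi * x) + 0.3 * (x - 0.5)

x_hi = np.linspace(0.0, 1.0, 5)       # only 5 "experimental" points
y_hi = f_hi(x_hi)

# Linear correlation between fidelities: y_hi ~ rho * f_lo(x) + delta(x).
rho = np.linalg.lstsq(f_lo(x_hi)[:, None], y_hi, rcond=None)[0][0]

def predict_mf(x):
    """Multi-fidelity prediction: scaled low-fidelity model + GP correction."""
    delta = gp_mean(x_hi, y_hi - rho * f_lo(x_hi), x)
    return rho * f_lo(x) + delta
```

Because the low-fidelity model carries the overall shape, the GP only has to learn a small, smooth discrepancy from the five high-fidelity points.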
Active learning of jet dynamics
Up to now, we have demonstrated that GPJet is a robust tool for passive learning of jet dynamics. By “passive”, we mean that, given a high-fidelity dataset provided by the Machine Vision module and augmented by low-fidelity data provided by the Physics-based module, the GPR capabilities of the Machine Learning module can model the function that mathematically represents the relation between the jet radius and the needle-tip-to-collector distance. In addition, we employed the same strategy, without low-fidelity data, to model the function describing the highly dynamic relationship between the Lag distance and the ratio of the collector speed to the jet velocity at the point of interest.
In this section, we asked whether we could actively choose the data points across the jet length at which to observe the outputs, in order to accurately model the underlying function describing the jet dynamics with respect to the extracted jet features. To accomplish that, we deploy a virtual MEW machine, whose dynamic range is defined by the available dataset, and we run simulation experiments to demonstrate whether we can learn the underlying functions in an active manner as quickly and accurately as possible.
To accomplish that, we set up an exploration scenario, a setup closely related to optimal experimental design, as it equates to adaptively selecting the input spatial points across the jet length based on what is already known about the function describing the jet radius profile and where knowledge can be improved. We ran active learning on both the multi-fidelity GP and the simple GP for the jet radius profile evolution. The results are shown in Fig. 7. To systematically compare the performance of the two models, we chose the same initial training points (Fig. S5ai and S5bi) and the same number of iterations during each training phase. For each iteration (Fig. S5a(i–vi) and S5b(i–vi)), we graphically show, on the processed video frame, the adaptively selected point across the jet length, and below that, the modeling results. The adaptive selection is based on a purely exploratory acquisition function that steers the point selection towards the area of least knowledge, quantified by the uncertainty output of the modeling step. The results demonstrate that, in a purely exploratory scenario, we can learn the underlying function actively, accurately, and fast. Each iteration phase for the multi-fidelity (MFD) GP and the simple GP lasts about ~0.5 s, leading to a total learning time of 3 s. Lastly, we extract performance metrics to compare active learning between the multi-fidelity and simple GP models (see Fig. S2a–c in the Supplementary Information). The results demonstrate that active learning on the MFD model is significantly faster (Fig. S2a–c), with more confident predictions, since the model’s prior assumptions are constrained by domain-aware data.
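The purely exploratory loop can be sketched as follows: at each iteration, the next query point is the candidate with the largest posterior standard deviation. The target function, kernel settings, and iteration budget are illustrative assumptions; GPJet uses its own acquisition and the video-derived radius profile.

```python
import numpy as np

def rbf(x1, x2, length=0.2):
    return np.exp(-0.5 * (x1[:, None] - x2[None, :]) ** 2 / length ** 2)

def gpr(x_tr, y_tr, x_te, length=0.2, noise=1e-6):
    """Zero-mean GP posterior mean and standard deviation (unit prior variance)."""
    K = rbf(x_tr, x_tr, length) + noise * np.eye(len(x_tr))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_tr))
    K_s = rbf(x_te, x_tr, length)
    v = np.linalg.solve(L, K_s.T)
    std = np.sqrt(np.clip(1.0 - np.sum(v ** 2, axis=0), 0.0, None))
    return K_s @ alpha, std

target = lambda z: 0.1 + 0.9 * np.exp(-3.0 * z)  # stand-in for the measured profile
candidates = np.linspace(0.0, 1.0, 93)           # same grid size as jet metrology

# Start from two endpoint observations and query where uncertainty is largest.
x_obs = [0.0, 1.0]
for _ in range(7):
    mu, std = gpr(np.array(x_obs), target(np.array(x_obs)), candidates)
    x_obs.append(candidates[np.argmax(std)])     # pure exploration

mu, std = gpr(np.array(x_obs), target(np.array(x_obs)), candidates)
```

Because the acquisition is the posterior standard deviation itself, the queries spread out to cover the jet length, and the uncertainty band collapses after a handful of iterations.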
Then, we employ the same strategy to actively learn the function describing the relation between the Lag distance and the speed ratio in an exploration scenario. The results are shown in Fig. 8. The virtual MEW machine performs remarkably well in the prescribed experimental simulation. It starts by randomly selecting a speed ratio equal to 5 (see Fig. 8a), and after 4 additional iterations (see Fig. 8a–d), the underlying function is quite effectively approximated. Performance metrics (see Fig. S3b–d and Fig. S4 in the Supplementary Information) demonstrate that the underlying function can be learned quickly in an active manner, providing predictions with higher confidence compared to the passive learning approach, specifically after training the GP with all the available high-fidelity observations.
Finally, we set out to address the following question: can the virtual MEW machine find the speed ratio corresponding to the minimum Lag distance in an autonomous way? Autonomy, in this paper, refers to the machine’s ability to self-drive the measurements of an experiment. Some initial parameters, such as the parameters to explore and their corresponding ranges constrained by the dataset, are defined by the user a priori. Instead of us learning the relation between the Lag distance and the speed ratio and afterwards calibrating the machine hyperparameters, we aim to demonstrate a self-calibrating scenario. To achieve that, we employ an exploitation–exploration strategy in the spirit of Bayesian optimization (BO). It is called exploration–exploitation because scenarios where the output of the underlying function must be optimized require us both to sample uncertain areas to acquire more knowledge about the function (exploration) and to sample input points that are likely to produce extremum outputs given the current knowledge of the function (exploitation). The virtual MEW machine performs remarkably well in the prescribed experimental simulation. It starts again by randomly selecting a speed ratio (see Fig. 9a), and after 2 additional iterations (see Fig. 9a–c), the speed ratio corresponding to the minimum Lag distance has been reached. This speed ratio is close to 1, as expected from the fluid-mechanical sewing machine model, which is described in detail in Physics-based modeling module under the Methods section. BO thus validates the initial hypothesis about the universality of the fluid-mechanical sewing machine model.
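The exploitation–exploration loop can be sketched with a lower-confidence-bound acquisition over a GP surrogate. The synthetic lag curve (a parabola with its minimum at a speed ratio of 1, mimicking the sewing-machine prediction), the kernel settings, and the value of kappa are all illustrative assumptions, not the measured lag behavior or the optimizer settings of GPJet.

```python
import numpy as np

def rbf(x1, x2, length=1.0):
    return np.exp(-0.5 * (x1[:, None] - x2[None, :]) ** 2 / length ** 2)

def gpr(x_tr, y_tr, x_te, length=1.0, noise=1e-6):
    """Zero-mean GP posterior mean and standard deviation (unit prior variance)."""
    K = rbf(x_tr, x_tr, length) + noise * np.eye(len(x_tr))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_tr))
    K_s = rbf(x_te, x_tr, length)
    v = np.linalg.solve(L, K_s.T)
    std = np.sqrt(np.clip(1.0 - np.sum(v ** 2, axis=0), 0.0, None))
    return K_s @ alpha, std

lag = lambda r: (r - 1.0) ** 2        # synthetic: minimum lag at speed ratio 1
ratios = np.linspace(0.5, 5.0, 46)    # candidate speed ratios

x_obs = [0.5, 5.0]                    # two initial machine settings
y_obs = [lag(x) for x in x_obs]
for _ in range(10):
    y = np.array(y_obs)
    y_c, y_s = y.mean(), y.std() + 1e-12          # standardize targets
    mu, std = gpr(np.array(x_obs), (y - y_c) / y_s, ratios)
    lcb = mu - 2.0 * std                          # kappa = 2: explore + exploit
    x_next = ratios[np.argmin(lcb)]
    x_obs.append(x_next)
    y_obs.append(lag(x_next))

best = x_obs[int(np.argmin(y_obs))]
```

Early iterations sample high-uncertainty speed ratios; once the surrogate is informed, the acquisition concentrates queries around the minimum-lag setting.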
Conclusions
In this work, we demonstrate GPJet, an end-to-end physics-informed probabilistic machine learning framework that sets the basis for the next generation of self-calibrating E-jet printing machines. We construct a virtual melt electrowriting (MEW) machine using a previously published video dataset acquired by a conventional camera that performed online jet monitoring under various process conditions. We demonstrate that GPJet can extract high-fidelity jet features in near real time from the video data using a highly efficient computer vision algorithmic workflow implemented in a hybrid multiprocessing–multithreading approach. Additionally, two physics-based models were implemented, efficiently providing prior process physics knowledge in the form of low-fidelity data. The first predicts the evolution of the jet across the free-flow regime, while the second predicts the deposition dynamics of a gravity-driven viscous thread onto a slowly moving surface, known as the “fluid-mechanical sewing machine”. Furthermore, we set out to learn the process dynamics with minimum experimental cost, as measured by the required number of high-fidelity data points. To accomplish that, a probabilistic machine learning module was developed based on Gaussian process regression (GPR) for the surrogate modeling step, active learning for pure process-dynamics exploration, and Bayesian optimization for process optimization. Two case studies were performed, one regarding the jet diameter profile and the other regarding the lag distance. Our results demonstrate that for an offline learning strategy, the number of data points and their positions in the design space are crucial for the quality and confidence of the predictions in both cases.
Also, in the case of the jet radius profile, a multifidelity GPR modeling approach coupling high-fidelity data from the Machine Vision module with low-fidelity data from the physics-based jet evolution model can provide better and more confident predictions while using fewer high-fidelity observations. Incorporating prior physics knowledge reduces computational cost, since the jet diameter needs to be evaluated at fewer points across the nozzle-to-bed distance, leading to even faster video processing times during the high-fidelity feature extraction step. As a next step, an online learning strategy was employed to actively learn the jet diameter profile with and without multifidelity modeling. Importantly, we demonstrate that in the online learning scenario, when learning is informed by physics and guided by the variance, jet evolution can be learned more accurately than in the offline learning scenario. Finally, in an online calibration scenario, the Optimizer managed to minimize the lag distance by fine-tuning the collector’s speed.
GPJet serves as an important step toward autonomous self-calibrating E-jet printing processes by integrating machine learning models that offer (a) uncertainty quantification for decision making after the modeling step and (b) lower-fidelity physics-based models for higher computational efficiency during online deployment. It is important to recognize the current limitations of GPJet and the challenges that we are trying to overcome with our ongoing work. In this study, we are bound by the previously published video dataset that we used to test our framework. We are building our own physical automated manufacturing system. This will allow us to perform self-calibration experiments by setting the machine to be guided by GPJet and actively search for jet stability conditions with prescribed fiber diameter values over the whole dynamic range of each independent process parameter. Furthermore, updating the Machine Vision module with more robust algorithms is imperative for generalized use across the whole family of E-jet printing technologies. For example, additional functions will be adopted to detect the transition from the nozzle diameter to the jet diameter, differentiating the two features and tracking the jet as it moves toward or away from the collector. We plan to include these updates in GPJet to explore its robustness beyond steady-state printing conditions, including the transient behavior of the jet during initial jet formation, where we expect unseen jet instability phenomena, such as fiber breakage and beads across the jet length. GPJet can be easily integrated with, and guide, any physical E-jet printing machine using a bidirectional network communication protocol.
The jet features extracted by the Machine Vision module after each experimental run are fed to the Machine Learning module, which outputs a set of instructions containing the values of the recommended independent process parameters; these are then fed to the machine’s control platform through the serial port. Lastly, the generalization of Gaussian processes beyond their training data, given the uncertainty property, rests entirely on the choice of kernel that shapes our prior belief. Incorporating prior physics knowledge allowed us to choose radial basis functions, whose exponential nature correlated well with our physics-based model. Despite the limits of the available dataset, we have demonstrated the utility of GPJet as an automated online calibration tool powered by process-relevant data of multiple fidelities, presenting a large step toward the autonomy of E-jet printing.
Methods
Machine vision module
Jet metrology
For the implementation of the Jet Metrology algorithm, Python 3.8 was used along with the Python bindings of the OpenCV library, which enable us to read and process video data. The jet metrology algorithm consists of two sub-algorithms: the first is the object segmentation and detection algorithm, and the second is the feature extraction algorithm.
The first sub-algorithm segments the needle tip, the Taylor cone, the jet, and the deposited fiber on the collector. In addition, the algorithm attempts to find the jet’s deposition point on the collector. Finally, the segmented objects of interest are plotted for the user to visually inspect the output and assess the performance of the algorithm. To detect the objects of interest in each video frame we use the closely related meanshift^{31} and camshift^{32} algorithms.
The meanshift algorithm is based on a statistical concept directly related to clustering. Similar to other clustering algorithms, the meanshift algorithm scans the whole frame for high concentrations of pixels of the same color. The main difference between the meanshift and camshift algorithms is that camshift can adjust the tracking box, changing its size and direction to better correlate with the movements of the tracked object. The meanshift and camshift algorithms are useful tools for object tracking. Unlike neural networks and other machine learning methods for object detection, these algorithms can be immediately implemented and deployed unsupervised, i.e., without the need to train a model with numerous labeled images. Instead, the algorithm takes as input the initial color of the object to be detected and then tracks it throughout the rest of the video. On the other hand, because they use color as the primary means of identification, neither algorithm can identify objects based on specific shapes and features, which makes them less powerful than other methods. Furthermore, objects that vary widely in color, as well as complex or noisy backgrounds, can make object detection and tracking problematic. As a result, the meanshift and camshift algorithms work best under controlled environments.
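The core mode-seeking step of meanshift can be made concrete with a short sketch. This is a minimal pure-NumPy illustration of the idea, not the OpenCV implementation (which additionally operates on color-histogram back-projections of the frame); the circular window and all parameter values are hypothetical:

```python
import numpy as np

def mean_shift_mode(points, start, radius, n_iter=50, tol=1e-6):
    """Minimal mean-shift sketch: repeatedly move the window center to the
    centroid of the points that fall within `radius` of the current center."""
    center = np.asarray(start, dtype=float)
    for _ in range(n_iter):
        mask = np.linalg.norm(points - center, axis=1) <= radius
        if not mask.any():          # empty window: nothing to shift toward
            break
        new_center = points[mask].mean(axis=0)
        if np.linalg.norm(new_center - center) < tol:
            break                   # converged onto a density mode
        center = new_center
    return center
```

Starting the window anywhere near a dense blob of same-colored pixels drives the center onto that blob, which is how the tracking box follows an object of a given color between frames.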
The first step is to invert the image colors so that the objects of interest are white and the background black. The next step is to apply a multi-color mask to segment them and to change the image color space from Blue, Green, Red (BGR) to Hue, Saturation, Value (HSV). Finally, the meanshift algorithm is applied to detect the needle and the Taylor cone, and the camshift algorithm to detect and track the jet.
To find the deposition point, the algorithm needs to know the collector’s position. It then creates a window around the collector, crops this region of interest from the frame, and processes it instead of the whole frame. The built-in function used to find the deposition point is cv2.goodFeaturesToTrack. This function finds the most prominent corner in the region of interest by calculating its eigenvalues, as described in ref.^{33}.
Finally, by subtracting the deposition point from the nozzle’s position (center of the blue rectangle in Fig. 3c), we get the lag distance, which is depicted with a two-way orange arrow in Fig. 3c.
The second sub-algorithm is responsible for extracting all the jet features that are relevant to the process dynamics. These features are the diameter, areas, and angles of the jet as we move along the z-axis. Another important feature is the velocity of each jet point along the x-axis relative to the nozzle’s position. To get all these features we follow a straightforward procedure. The algorithm takes three inputs: the first is the current video frame; the second is the calibration factor (\({cf}\)), which correlates distance units (mm) with pixels; the last is the stride, which indicates how many pixel rows along the z-axis are skipped between computations. Using too small a stride would lead to more precise calculations but would tremendously increase the computation time. On the other hand, using too large a stride would lead to shorter computation times but at the risk of losing important information.
The first step is to convert the frame from the BGR color space to grayscale so that the Canny edge detection algorithm^{34} can be applied. The two parameters of the Canny edge detector, [threshold_1, threshold_2], were set to 150 and 255 in a semi-automatic way, using trackbars while performing edge detection on other video samples. After performing Canny edge detection, we read the first row of pixels in our Canny frame, which is now an array of 0s and 255s. If the Canny algorithm has been applied correctly, when we read this row of pixels from left to right, the first 255 we encounter should be the left edge (\({le}\)) of our jet. Likewise, the first 255 we encounter while reading the row of pixels from right to left should be the right edge (\({re}\)) of our jet. By subtracting these two pixel indices and multiplying by the calibration factor we get the diameter of the jet at this position on the z-axis, which is equal to \(2\,{R}_{j}\):
$$2\,{R}_{j}=\left({re}-{le}\right){cf}$$
These indices are also stored in two variables (\(l{e}_{{previous}},{r}{e}_{{previous}}\)) so that they can be used to calculate the jet angles as we move down the z-axis. The procedure is then repeated every ‘stride’ rows. After finding the left (\({le}\)) and right (\({re}\)) edges and calculating the diameter, the area and the left and right angles are calculated from \({le}\), \({re}\), \(l{e}_{{previous}}\), \({r}{e}_{{previous}}\), and the stride.
The \(l{e}_{{previous}},{r}{e}_{{previous}}\) variables are then updated with the \({le},{re}\) values. After accessing all of the frame’s rows, the algorithm returns arrays containing all the quantified diameters, areas, right boundaries, left angles, and right angles. The same procedure is applied to all frames. Right boundaries are important because by subtracting the right edges of two consecutive frames we can calculate the jet’s velocity \(({U}_{j})\) along the x-axis.
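The row-scan procedure above can be sketched as follows. This is a simplified pure-NumPy illustration that assumes the binary Canny output (an array of 0s and 255s) has already been computed; the calibration factor and stride values are hypothetical:

```python
import numpy as np

def jet_features(edges, cf, stride):
    """Scan every `stride`-th row of a binary Canny edge map and return the
    jet diameter (2*R_j, in mm) plus the left/right edge indices per row.
    `cf` is the calibration factor in mm per pixel."""
    diameters, left_edges, right_edges = [], [], []
    for row in edges[::stride]:
        cols = np.flatnonzero(row == 255)
        if cols.size < 2:                 # no edge pair detected on this row
            continue
        le, re = cols[0], cols[-1]        # first 255 from the left / from the right
        left_edges.append(le)
        right_edges.append(re)
        diameters.append((re - le) * cf)  # pixel distance -> mm
    return np.array(diameters), np.array(left_edges), np.array(right_edges)

def jet_velocity(re_now, re_previous, cf, dt):
    """x-axis jet velocity U_j from the right-edge shift between two frames
    separated by `dt` seconds."""
    return (re_now - re_previous) * cf / dt
```

Comparing the returned edge indices of consecutive rows (the \(l{e}_{{previous}},{r}{e}_{{previous}}\) bookkeeping described in the text) likewise yields the local jet angles.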
Physicsbased modeling module
Multiphysics model
The importance of accurately extracting jet properties is underscored by several studies on predicting the jet diameter in the stable region through mathematical modeling. Zhmayev et al. proposed a model fully coupling the conservation of mass, momentum, charge, and energy equations with a constitutive model and the electric field equations at steady state^{33}. Like most models, they utilize the thin-filament approximation to obtain a simpler and more tractable solution. This assumption is made possible by appropriately averaging the model variables across the radial direction. In addition, the charge and electric field equations are simplified under the assumption of low electrical conductivity, as compared to the governing equations for isothermal simulations presented by Carroll and Joo^{34}. The conservation of energy relation and a non-isothermal constitutive model were added to extend the model to non-isothermal situations. The resulting nondimensionalized governing equations are given in Table S5 in the Supplementary Information.
The system of equations can be reduced to a set of five coupled first-order ordinary differential equations (ODEs). Boundary conditions are required to proceed toward the numerical solution.
The model was implemented in Python. Since the true material properties and parameters are not provided with the dataset, the values used in ref.^{13} for PCL were adopted; they are presented in Tables S2 and S3. As also reported in refs.^{12,13}, the model slightly underpredicts the jet radius in the Taylor cone area, but once the jet is stabilized, it accurately predicts its radius. Because the volumetric flow rate (\(Q\)) is also not provided with the dataset, a Particle Swarm Optimization (PSO) algorithm was implemented to find the \(Q\) for which the predicted jet radius best fits the computer vision observations.
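The fitting step can be sketched as follows. The swarm update coefficients are common textbook defaults, and `model_radius` is a hypothetical closed-form stand-in for the jet radius predicted by the full ODE model, used here only so the example is self-contained:

```python
import numpy as np

def pso_minimize(loss, bounds, n_particles=20, n_iter=100, seed=0):
    """Bare-bones particle swarm optimization of a scalar parameter."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, n_particles)               # particle positions
    v = np.zeros(n_particles)                          # particle velocities
    pbest, pbest_f = x.copy(), np.array([loss(xi) for xi in x])
    gbest = pbest[pbest_f.argmin()]                    # best position so far
    for _ in range(n_iter):
        r1, r2 = rng.random(n_particles), rng.random(n_particles)
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([loss(xi) for xi in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        gbest = pbest[pbest_f.argmin()]
    return gbest

# Hypothetical stand-in for the physics model's radius prediction.
def model_radius(z, Q):
    return np.sqrt(Q) * np.exp(-z)

z = np.linspace(0.0, 2.0, 50)
observed = model_radius(z, Q=1.3)                      # vision "observations"
loss = lambda Q: float(np.mean((model_radius(z, Q) - observed) ** 2))
Q_fit = pso_minimize(loss, bounds=(0.1, 5.0))
```

In the actual module, the loss would compare the ODE model's predicted radius profile against the diameters extracted by the Machine Vision module.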
Geometrical model
The lag distance is a highly important parameter for the quality of the process outcome. Specifically, for some collector speeds the jet falls onto the moving collector in a way reminiscent of a sewing machine, generating a rich variety of periodic patterns, such as meanders, W patterns, alternating loops, and translated coiling (see Fig. 9d). Brun et al.^{35} proposed a quasi-static geometrical model, consisting of three coupled ordinary differential equations for the radial deflection, the orientation, and the curvature of the path of the jet’s contact point with the collector, capable of reconstructing the patterns observed experimentally while successfully calculating the bifurcation thresholds of the different patterns. They also showed that the jet/collector velocity ratio (\({U}_{c}/{V}_{{jm}}\)) was the key factor for pattern variation.
According to this geometrical model, the deposited trace on the collector is a combination of the orbit of the contact point (when the collector’s speed is zero, \({U}_{c}=0\), the jet creates coiling patterns with radius \({R}_{c}\)) and the movement of the collector.
$$q\left(s,t\right)=r\left(s\right)+{U}_{c}\left(t-\frac{s}{{V}_{{jm}}}\right){{\bf{e}}}_{x}$$
where \(q(s,t)\) is the deposited trace, \(s\) is the arc length, \(t\) is time, \(r(s)\) is the contact point at time \(s/{V}_{{jm}}\), \({{\bf{e}}}_{x}\) is the direction of the collector’s speed, and \(t-s/{V}_{{jm}}\) is the time during which the contact point moves together with the collector. Differentiating \(q\left(s,t\right)\), moving from Cartesian to polar coordinates (\(r,\psi\) denote the polar coordinates of the contact point \(r(s)\)), and considering the curvature \({\theta }^{{\prime} }\) at the bottom of the jet, we get the system of ODEs for \(r\), \(\psi\), and \(\theta\).
This geometrical model was implemented in Python, and by varying the dimensionless parameter \({U}_{c}/{V}_{{jm}}\) from 0 to 1 as suggested in ref.^{35}, the orbit and the deposited trace can be reconstructed. Verifying the results of ref.^{35}, the critical velocity at which the straight pattern appears is \({U}_{c}={V}_{{jm}}\), i.e., \({U}_{c}/{V}_{{jm}}=1\). For speed ratios \({0 < U}_{c}/{V}_{{jm}} < 1\) the process is highly unstable, forming translated coiling, alternating loops, W patterns, and meanders at speed ratios of 0.23, 0.48, 0.64, and 0.83, respectively.
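The superposition of orbit and collector motion can be sketched directly from the trace equation. Here the orbit \(r(s)\) is assumed, for illustration only, to be a circle of radius \({R}_{c}\); the actual model obtains it by integrating the three coupled ODEs:

```python
import numpy as np

def deposited_trace(s, Rc, speed_ratio, V_jm=1.0):
    """Trace q(s, t) on the collector at observation time t = s[-1] / V_jm,
    assuming a circular contact-point orbit of radius Rc (illustrative only).
    `speed_ratio` is the dimensionless Uc / V_jm."""
    t = s[-1] / V_jm
    orbit = Rc * np.column_stack([np.cos(s / Rc), np.sin(s / Rc)])
    Uc = speed_ratio * V_jm
    trace = orbit.copy()
    trace[:, 0] += Uc * (t - s / V_jm)  # collector carries earlier points along e_x
    return trace

s = np.linspace(0.0, 20.0, 2000)
coiling = deposited_trace(s, Rc=1.0, speed_ratio=0.0)   # Uc = 0: pure coiling
meander = deposited_trace(s, Rc=1.0, speed_ratio=0.83)  # stretched periodic pattern
```

At \({U}_{c}=0\) the trace collapses onto the coiling circle of radius \({R}_{c}\), while increasing the speed ratio stretches the trace along the collector direction; in the full model the pattern type changes with the speed ratio, with the loops disappearing at \({U}_{c}/{V}_{{jm}}=1\).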
Machine learning module
Gaussian process regression
Gaussian process regression is a nonparametric stochastic-process-based method with a strong probabilistic foundation^{35}. GPR is a supervised machine learning technique which, unlike machine learning algorithms that give deterministic predictions, predicts a probability distribution based on Bayesian theory. The idea behind GPR is that the posterior probability can be updated from a prior probability, given a new observation. These characteristics allow the uncertainty of each point prediction to be quantified. Assume a dataset is available consisting of input–output pairs of observations \(D=\left\{{x}_{i},{y}_{i}\right\}=\left(x,y\right),\,{i}=1,\,2,\,\ldots ,{n}\), generated by an unknown model function \(f\).
\(f\left(x\right)\) can be completely specified by a mean function \(m\left(x\right)\) and a covariance function \(K\left(x,{x}^{{\prime} }\right)\).
GPR aims to learn the mapping between the set of input variables and the unknown model \(f(x)\), given the set of observations \(D\). To map this correlation, \(f(x)\) is typically assigned a GP prior.
Gaussian processes (GPs) are powerful modeling frameworks incorporating a variety of kernels. A Gaussian Process is a collection of random variables, any finite number of which have a joint Gaussian distribution^{35}.
$$f\left(x\right)\sim {GP}\left(m\left(x\right),\,k\left(x,{x}^{{\prime} };\theta \right)\right)$$
where \(k\) is a kernel function with a set of trainable hyperparameters \(\theta\). The kernel defines a symmetric positive-definite covariance matrix \({K}_{{ij}}=k({x}_{i},{x}_{j};\theta )\), \(K\in {{\mathbb{R}}}^{n\times n}\), which reflects the prior available knowledge about the function to be approximated. Furthermore, the kernel’s eigenfunctions define a reproducing kernel Hilbert space that determines the class of functions within the approximation capacity of the predictive GP posterior mean. The hyperparameters \(\theta\) are trained by maximizing the marginal log-likelihood of the model^{35}.
Assuming a Gaussian likelihood and using the Sherman–Morrison–Woodbury formula, the expression for the posterior distribution \(p(f|y,X)\) is tractable and can be used to predict a new output \({f}_{n+1}\) for a new input \({x}_{n+1}\),
where \({k}_{n+1}=\left[k\left({x}_{n+1},\,{x}_{1}\right),\,\ldots ,{k}\left({x}_{n+1},{x}_{n}\right)\right]\). As mentioned before, the prediction consists of a mean, computed using the posterior mean \({\mu }_{* }\), and an uncertainty term, computed using the posterior variance \({\sigma }_{* }^{2}\).
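A minimal NumPy sketch of these posterior formulas (zero prior mean and a fixed RBF kernel, rather than hyperparameters trained by maximizing the marginal log-likelihood) is:

```python
import numpy as np

def rbf(a, b, length=1.0, var=1.0):
    """Squared-exponential (RBF) kernel matrix between 1-D input arrays."""
    return var * np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length**2)

def gp_posterior(x_train, y_train, x_test, noise=1e-6, length=1.0, var=1.0):
    """Exact GP regression posterior mean and variance (zero prior mean)."""
    K = rbf(x_train, x_train, length, var) + noise * np.eye(len(x_train))
    k_star = rbf(x_test, x_train, length, var)
    K_inv = np.linalg.inv(K)
    mu = k_star @ K_inv @ y_train                                   # posterior mean
    cov = rbf(x_test, x_test, length, var) - k_star @ K_inv @ k_star.T
    return mu, np.diag(cov)                                         # mean, variance
```

The variance returned alongside the mean is exactly the uncertainty term that the acquisition functions in the later sections act on.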
Multifidelity modeling
The GPR framework presented above can be extended to construct probabilistic models able to consider numerous information sources of different fidelity levels^{24}. Supposing that \(s\) levels of information sources are available, the input–output data pairs can be organized by increasing fidelity as \({D}_{t}=\left\{{x}_{t},\,{y}_{t}\right\},\,{t}=1,\,2,\ldots ,{s}\). Thus, \({y}_{s}\) denotes the output of the most accurate and most expensive-to-evaluate model, while \({y}_{1}\) denotes the output of the cheapest and least accurate model. Assuming that only two models are available, a high-fidelity model and a low-fidelity model, the high-fidelity model can be defined as a scaled sum of the low-fidelity model plus an error term:
$${f}_{{high}}\left(x\right)=\rho \,{f}_{{low}}\left(x\right)+{f}_{{err}}\left(x\right)$$
where \(\rho\) is a scaling constant quantifying the correlation between the two models and \({f}_{{err}}(x)\) denotes another GP which models the error.
A numerically efficient recursive inference scheme can then be constructed by replacing the GP prior \({f}_{{low}}(x)\) with the GP posterior \({f}_{{lo}{w}_{{n}_{{low}}+1}}\left(x\right)\) of the previous inference level, while assuming that the corresponding experimental design sets {D_{1}, D_{2}, …, D_{s}} have a nested structure. This implies that the training inputs of the higher-fidelity model need to be a subset of the training inputs of the lower-fidelity model. This scheme exactly matches the Gaussian posterior distribution predicted by the fully coupled scheme, only now the inference problem is decoupled into two GPR problems, yielding the multifidelity posterior distribution \(p\left({f}_{{high}}|{y}_{{high}},\,{X}_{{high}},{f}_{{lo}{w}_{{n}_{{low}}+1}}\right)\) with a predictive mean and variance at each level^{18},
where \({n}_{{high}},\,{n}_{{low}}\) denote the number of training points from the high- and low-fidelity models, respectively.
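The two-level recursive scheme can be sketched as follows. For brevity, the scaling \(\rho\) is fit by least squares, kernel hyperparameters are fixed rather than trained, and the synthetic low- and high-fidelity functions are illustrative stand-ins:

```python
import numpy as np

def rbf(a, b, length=1.0):
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length**2)

def gp_fit_predict(x, y, xs, noise=1e-6):
    """GP posterior mean at xs (zero prior mean, fixed RBF kernel)."""
    K = rbf(x, x) + noise * np.eye(len(x))
    return rbf(xs, x) @ np.linalg.solve(K, y)

# Illustrative models: the high-fidelity output is a scaled low-fidelity
# output plus a smooth error term, mirroring f_high = rho * f_low + f_err.
f_low = lambda x: np.sin(x)
f_high = lambda x: 1.5 * np.sin(x) + 0.2 * x

x_low = np.linspace(0.0, 3.0, 15)      # plentiful cheap observations
x_high = x_low[::3]                    # nested, sparse expensive subset
y_low, y_high = f_low(x_low), f_high(x_high)

# Level 1: GP posterior of the low-fidelity model at the high-fidelity inputs.
mu_low_at_high = gp_fit_predict(x_low, y_low, x_high)

# Level 2: estimate rho, then model the remaining error with a second GP.
rho = float(np.linalg.lstsq(mu_low_at_high[:, None], y_high, rcond=None)[0][0])
resid = y_high - rho * mu_low_at_high

def predict_high(xs):
    return rho * gp_fit_predict(x_low, y_low, xs) + gp_fit_predict(x_high, resid, xs)
```

The nested design means the second GPR problem only ever sees the sparse high-fidelity inputs, which is what keeps the recursive scheme cheap.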
Active learning
Let us assume again that \(n\) observations \(\left\{{x}_{i},\,{y}_{i}\right\},\,{i}=1,\,\ldots ,{n}\) are available, where \({y}_{i}=f\left({x}_{i}\right)\), and that the next point to be evaluated, \(({x}_{n+1},\,{y}_{n+1})\), needs to be chosen. The question that arises is whether there is a more informed way to pick these points when evaluations are expensive to perform, rather than picking them at random.
This is achieved through an acquisition function \(u(\cdot )\). The role of the acquisition function is to guide the search for the optimum. Acquisition functions are defined such that high acquisition values correspond to a potential optimum of the unknown model \(f\), to large prediction uncertainty, or to a combination of the two. Maximizing the acquisition function selects the next point at which to evaluate the function. Consequently, the goal is to sample \(f\) sequentially at \({{\arg \max }}_{x}\,u\left(x|D\right)\).
Every acquisition function depends on \(\mu\), \({\sigma }^{2}\), or a combination of both. The relative weight given to each defines the exploration–exploitation trade-off. When exploring, points where the GP variance is large should be chosen; when exploiting, points where the GP mean is closest to the extremum should be chosen. Many acquisition functions are available; some of them are presented in Table S4.
After sampling \({x}_{n+1}\) and evaluating \({f}_{n+1}\), GP regression is performed again to fit the new point as well. The process then repeats until termination criteria are met, such as reaching a maximum number of iterations, reaching a minimum or maximum value, or driving the uncertainty below an allowed value.
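The whole loop can be sketched end-to-end with the GP reduced to its essentials and a lower-confidence-bound acquisition function; the one-dimensional objective, grid, kernel length scale, and exploration weight below are all illustrative:

```python
import numpy as np

def rbf(a, b, length=0.5):
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length**2)

def gp(x, y, xs, noise=1e-6):
    """GP posterior mean and variance on a grid (zero prior mean, unit signal)."""
    K = rbf(x, x) + noise * np.eye(len(x))
    K_inv = np.linalg.inv(K)
    ks = rbf(xs, x)
    mu = ks @ K_inv @ y
    var = np.clip(1.0 - np.sum((ks @ K_inv) * ks, axis=1), 0.0, None)
    return mu, var

f = lambda x: (x - 0.3) ** 2             # toy "unknown" function to minimize
grid = np.linspace(0.0, 1.0, 101)        # candidate inputs

x_obs = np.array([0.9])                  # single initial observation
y_obs = f(x_obs)
kappa = 2.0                              # exploration weight
for _ in range(8):
    mu, var = gp(x_obs, y_obs, grid)
    acq = -mu + kappa * np.sqrt(var)     # lower confidence bound (minimization)
    x_next = grid[np.argmax(acq)]        # next point to evaluate
    x_obs = np.append(x_obs, x_next)
    y_obs = np.append(y_obs, f(x_next))

x_best = x_obs[np.argmin(y_obs)]         # best input found so far
```

Replacing \(f\) with an experimental measurement (e.g., the lag distance at a given speed ratio) and the grid with the allowed parameter range turns this loop into a self-calibration procedure of the kind described above; dropping the \(-\mu\) term recovers the pure-exploration (maximum-variance) strategy.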
Data availability
The datasets generated and/or analyzed during the current study are available in the GitHub repository, https://github.com/superlabsgr/gpjet.
Code availability
The source code of the GPJet framework is available to the readers through this public GitHub repository: https://github.com/superlabsgr/gpjet.
References
Truby, R. L. & Lewis, J. A. Printing soft matter in three dimensions. Nature 540, 371–378 (2016).
Lewis, J. A. & Ahn, B. Y. Three-dimensional printed electronics. Nature 518, 42–43 (2015).
Goh, G. D. et al. Process–structure–properties in polymer additive manufacturing via material extrusion: a review. Crit. Rev. Solid State Mater. Sci. 45, 1–21 (2019).
Park, J.-U. et al. High-resolution electrohydrodynamic jet printing. Nat. Mater. 6, 782–789 (2007).
Onses, M. S., Sutanto, E., Ferreira, P. M., Alleyne, A. G. & Rogers, J. A. Mechanisms, capabilities, and applications of high-resolution electrohydrodynamic jet printing. Small 11, 4237–4266 (2015).
Brown, T. D., Dalton, P. D. & Hutmacher, D. W. Direct writing by way of melt electrospinning. Adv. Mater. 23, 5651–5657 (2011).
Robinson, T. M., Hutmacher, D. W. & Dalton, P. D. The next frontier in melt electrospinning: taming the jet. Adv. Funct. Mater. 29, 1904664 (2019).
Tourlomousis, F., Ding, H., Kalyon, D. M. & Chang, R. C. Melt electrospinning writing process guided by a printability number. J. Manuf. Sci. Eng. 139, 081004 (2017).
Hochleitner, G. et al. Fibre pulsing during melt electrospinning writing. Bionanomaterials 17, 159–171 (2016).
Hrynevich, A., Liashenko, I. & Dalton, P. D. Accurate prediction of melt electrowritten laydown patterns from simple geometrical considerations. Adv. Mater. Technol. 5, 2000772 (2020).
Wunner, F. M. et al. Printomics: the high-throughput analysis of printing parameters applied to melt electrowriting. Biofabrication 11, 025004 (2019).
Qin, J. et al. Research and application of machine learning for additive manufacturing. Addit. Manuf. 52, 102691 (2022).
Stach, E. et al. Autonomous experimentation systems for materials development: a community perspective. Matter 4, 2702–2726 (2021).
Fan, D. et al. A robotic Intelligent Towing Tank for learning complex fluid–structure dynamics. Sci. Robot. 4, eaay5063 (2019).
King, R. D. et al. The automation of science. Science 324, 85–89 (2009).
Karniadakis, G. E. et al. Physics-informed machine learning. Nat. Rev. Phys. 3, 422–440 (2021).
Meng, X., Wang, Z., Fan, D., Triantafyllou, M. S. & Karniadakis, G. E. A fast multi-fidelity method with uncertainty quantification for complex data correlations: application to vortex-induced vibrations of marine risers. Comput. Methods Appl. Mech. Eng. 386, 114212 (2021).
Forrester, A. I. J., Sóbester, A. & Keane, A. J. Constructing a surrogate. In Engineering Design via Surrogate Modelling: A Practical Guide 33–76 (John Wiley & Sons, Ltd, 2008).
Rasmussen, C. E. & Williams, C. K. I. Gaussian Processes for Machine Learning. (MIT, Cambridge, Massachusetts, 2006).
Kusne, A. G. et al. On-the-fly closed-loop materials discovery via Bayesian active learning. Nat. Commun. 11, 5966 (2020).
Perdikaris, P., Raissi, M., Damianou, A., Lawrence, N. D. & Karniadakis, G. E. Nonlinear information fusion algorithms for data-efficient multi-fidelity modelling. Proc. R. Soc. A Math. Phys. Eng. Sci. 473, 20160751 (2017).
Babaee, H., Perdikaris, P., Chryssostomidis, C. & Karniadakis, G. E. Multi-fidelity modelling of mixed convection based on experimental correlations and numerical simulations. J. Fluid Mech. 809, 895–917 (2016).
Parussini, L., Venturi, D., Perdikaris, P. & Karniadakis, G. E. Multi-fidelity Gaussian process regression for prediction of random fields. J. Comput. Phys. 336, 36–50 (2017).
Noack, M. M. et al. Gaussian processes for autonomous data acquisition at large-scale synchrotron and neutron facilities. Nat. Rev. Phys. 3, 685–697 (2021).
Tabor, D. P. et al. Accelerating the discovery of materials for clean energy in the era of smart automation. Nat. Rev. Mater. 3, 5–20 (2018).
Reyes, K. G. & Maruyama, B. The machine learning revolution in materials? MRS Bull. 44, 530–537 (2019).
Nikolaev, P. et al. Autonomy in materials research: a case study in carbon nanotube growth. Npj Comput. Mater. 2, 16031 (2016).
Saeidi-Javash, M. et al. Machine learning-assisted ultrafast flash sintering of high-performance and flexible silver–selenide thermoelectric devices. Energy Environ. Sci. 15, 5093–5104 (2022).
Gongora, A. E. et al. A Bayesian experimental autonomous researcher for mechanical design. Sci. Adv. 6, eaaz1708 (2020).
Erps, T. et al. Accelerated discovery of 3D printing materials using data-driven multi-objective optimization. Sci. Adv. 7, eabf7435 (2021).
Wang, Z., Pannier, C. P., Barton, K. & Hoelzle, D. J. Application of robust monotonically convergent spatial iterative learning control to microscale additive manufacturing. Mechatronics 56, 157–165 (2018).
Deneault, J. R. et al. Toward autonomous additive manufacturing: Bayesian optimization on a 3D printer. MRS Bull 46, 566–575 (2021).
Zhmayev, E., Zhou, H. & Joo, Y. L. Modeling of non-isothermal polymer jets in melt electrospinning. J. Non-Newton. Fluid Mech. 153, 95–108 (2008).
Carroll, C. P. & Joo, Y. L. Electrospinning of viscoelastic Boger fluids: modeling and experiments. Phys. Fluids 18, 053102 (2006).
Brun, P.-T., Audoly, B., Ribe, N. M., Eaves, T. S. & Lister, J. R. Liquid ropes: a geometrical model for thin viscous jet instabilities. Phys. Rev. Lett. 114, 174501 (2015).
Acknowledgements
This work was supported by Superlabs Consortia funding and NCSR Demokritos funding from the Next Generation EU plan and specifically, Greece 2.0, the National Resilience and Recovery Fund for Research and Innovation, Grant #5179171 under the auspices of the General Secretariat for Research and Innovation.
Author information
Contributions
A.O. designed the experiments and analyzed the data, developed the code for the project and wrote the manuscript. T.L. supervised the entire project and wrote the manuscript. D.F. wrote and reviewed the manuscript. A.G. reviewed the manuscript. G.N. wrote and reviewed the manuscript. S.C. wrote and reviewed the manuscript. F.T. designed and supervised the entire project and wrote the manuscript.
Corresponding author
Ethics declarations
Competing interests
Alysia Garmulewicz is the Director of Materiom C.I.C. Filippos Tourlomousis is the Director of Superlabs AMKE and holds shares in Biological Lattice Industries, Corp.
Peer review
Peer review information
Communications Engineering thanks Angelo Hawa and the other, anonymous, reviewers for their contribution to the peer review of this work. Primary Handling Editors: Miranda Vinay, Mengying Su and Rosamund Daw.
Additional information
Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Supplementary information
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Oikonomou, A., Loutas, T., Fan, D. et al. PhysicsInformed Bayesian learning of electrohydrodynamic polymer jet printing dynamics. Commun Eng 2, 20 (2023). https://doi.org/10.1038/s44172023000690