Previa Alliance Podcast
There are few experiences as universal to human existence as pregnancy and childbirth, and yet their most difficult parts, perinatal mood and anxiety disorders (PMADs), are still dealt with in the shadows, shrouded in stigma. The fact is, 1 in 5 new and expecting birthing people will experience a PMAD, yet among those who do, many are afraid to talk about it, some are not even aware they're experiencing one, and others don't know where to turn for help. And when someone suffers from a maternal mental health disorder, it affects not only them but also their babies, partners, and families; it impacts our communities.
In the Previa Alliance Podcast series, Sarah Parkhurst and Whitney Gay are giving air to a vastly untapped topic by creating a space for their guests, including survivors of PMADs and healthcare professionals in maternal mental health, to share their experiences and expertise openly. And in doing so, Sarah and Whitney make it easy to dig deep and get real about the facts of perinatal mental health, fostering discussions about the raw realities of motherhood. Not only will Previa Alliance Podcast listeners walk away from each episode with a sense of belonging, they'll also be armed with evidence-based tools for healing, coping mechanisms, and the language to identify the signs and symptoms of PMADs: the necessary first steps on a path to treatment. The Previa Alliance Podcast series is intended for anyone considering pregnancy, currently pregnant, or postpartum, as well as the families and communities who support them.
Sarah Parkhurst
Previa Alliance Podcast Co-host; Founder & CEO of Previa Alliance
A postpartum depression survivor and mom to two boys, Sarah is on a mission to destigmatize the experiences of perinatal mood and anxiety disorders (PMADs), and to educate the world on the complex reality of being a mom. Sarah has been working tirelessly to bring to light the experiences of women who have not only suffered a maternal mental health crisis but who have survived it and rebuilt their lives. By empowering women to share their own experiences, by sharing expert advice and trusted resources, and by advocating for health care providers and employers to provide support for these women and their families, Sarah believes as a society we can minimize the impact of the current maternal mental health crisis, while staving off future ones.
Whitney Gay
Previa Alliance Podcast Co-host; licensed clinician and therapist
For the past ten years, Whitney has been committed to helping women heal from the trauma of a postpartum mental health crisis as well as process the grief of a miscarriage or the loss of a baby. She believes that the power of compassion paired with developing critical coping skills helps moms to heal, rebuild, and eventually thrive. In the Previa Alliance Podcast series, Whitney not only shares her professional expertise, but also her own personal experiences of motherhood and recovery from grief.
Follow us on Instagram @Previa.Alliance
Previa Alliance Podcast
ChatGPT Should NOT Be Your Therapist with Paige Bellenbaum
In an age where moms can Google every symptom and ask AI to decode every feeling, it’s easier than ever to confuse information with healing.
This episode dives into the growing trend of people turning to chatbots and “AI therapy” for emotional support — and the real risks behind it. Together, we’ll explore why connection, compassion, and clinical expertise still matter more than ever.
Click here for Paige's First Episode with Previa Alliance.
Follow Previa Alliance!
Previa Alliance (@previaalliance_) on Instagram
Keep the questions coming by sending them to info@previaalliance.com or DM us on Instagram!
Hi guys, welcome back to the Previa Alliance Podcast. I am so excited today. I have a lady I adore personally, professionally, any way you can adore someone: Miss Paige Bellenbaum is with us. And if she's new to you, oh my gosh, what a great day. If she's not, you're just gonna love her even more. Paige, welcome, my friend. Thank you for being with me.
SPEAKER_03:Sarah, it's always a pleasure to be with you. I think you're such a rock star in all the work that you've done in the perinatal mental health space. So I am just as over the moon to be here with you. So thanks for having me.
SPEAKER_01:If somebody's never met Paige, tell us all the wonderful things you've done and are doing, so that they realize what's fixing to grace their ears.
SPEAKER_03:So I am a person who experienced a PMAD. Twenty years ago, after my son was born, I had severe postpartum depression and anxiety. And I always say I'm very lucky to be here. It took me months and months to get support and care and treatment. And even as a trained, licensed clinical social worker who was meant to identify mental health risk in others, when it was happening to me, I had no idea what was going on. I just felt like I was drowning in an ocean and paddling as fast as I could to keep my head above water and breathe. I finally did get treatment. And as a result of slowly but surely feeling like myself again, I got really pissed, because there were so many people I spoke with who were suffering just like me. And there was next to nothing at the time by way of support or education to let new and expecting mothers and birthing people know how common it is to suffer from these conditions. And so I guess advocating for, educating around, and treating people with perinatal mood and anxiety disorders has become my third child. I went on to have a second. I did not experience a PMAD at that time, because I took my medication through pregnancy and I saw my therapist all throughout the postpartum period. I knew how to take care of myself through that. But this commitment to making change in this particular area, and making sure people know how common these illnesses are and how treatable they are, really has been at the forefront of my work. I started an organization called the Motherhood Center in New York. It exists to provide clinical treatment and support to perinatal people struggling with perinatal mood and anxiety disorders. It's a partial hospital program, one of only about, I'd say, eight that exist in the country for women and birthing people experiencing more moderate to severe mental illness. I have since stepped down from that role because I really wanted to pursue more of a journey forward in education. I feel like education is so mandatory when it comes to these conditions, and psychoeducation is really a form of prevention. The more we can arm people with recognizing the signs and symptoms of these conditions, the better chance we stand of really helping connect people to care and feel better. And so now I feel like a kid in a candy store. I'm saying yes to so many things. I'm teaching a perinatal mental health course at Hunter College, at the School of Social Work. I'm a trainer for Postpartum Support International, and I'm an advanced psychotherapy trainer. I'm working with city and state agencies to help them do quality improvement around perinatal mental health. I feel really blessed to do this work. And for whatever reason, I haven't gotten tired of it. So, all the more reason for me to be super excited to be here with you today.
SPEAKER_01:Well, I love it. And I'm so thankful. I hate that we have a shared lived experience, but we both totally get it, and, you know, we're using that to advance the conversation. And this is why, for this topic, I knew you'd be the perfect person. You know, the title of our podcast is ChatGPT Should Not Be Your Therapist, and people are probably going, well, what do you mean? I think it's an important conversation to have. AI is here. We can't deny that it's being integrated from, you know, elementary school on up, so the next generation, and even me, I love to use it to help me out with some grants, to help me phrase certain things. But what we are seeing, and what the news has unfortunately brought to our attention, is people using it in place of a therapist. And that's a whole conversation that we're gonna dig into. But most importantly, let's focus in on moms and women. Why would you think that would even be a thing? That I would say, I'm feeling lonely, Paige, or I'm depressed, I'm having anxiety, and I'm turning to AI, to ChatGPT. Like, why?
SPEAKER_03:I'm so glad for this question. And if you hadn't asked me, I was gonna start here, because I think the why is really important to set the context. The why is so many different things. Let's start with accessibility. We know that in this country, mental health is not a priority, not to get political, but particularly at this point in time. We know that being able to access trained and affordable mental health care is very, very difficult for a large subset of this country. We know that in places like New York City and other large cities, a lot of providers are out of network because the reimbursement rates are too small, and so a lot of trained clinicians are charging prices that a lot of people can't afford, right? We know that in more rural communities across the country, there's a real dearth of providers, especially in person. So cost and lack of access play a critical role in why people are turning toward easier, more accessible solutions. They find ChatGPT as somebody who, perhaps, and I don't even want to start by using the word somebody, because then we're giving it some type of human existence, but this auto-generated information is somewhere we have gone to answer questions in the past. Perhaps this is a place I can go to have a conversation about my feelings or to get advice on how to deal with a difficult situation. So these are some of the things that are leading people to ChatGPT. I mean, think about mothers and birthing people today and the enormous pressure they are under. I oftentimes say to my clients and patients, motherhood is harder now than it's ever been before, for so many reasons: the fact that we don't have mandatory paid leave, that we don't have universally affordable or free childcare, that we don't have enough services available to really hold up and support new and expecting mothers as they make this enormous transition into parenthood. List them all. But as we get more granular, we're also living in an era in which new parents, particularly new mothers and pregnant people, are being bombarded in their feeds, AI or anywhere else, with all the things you need to buy for your baby: this is the best stroller, this is the best formula, these are the best bottles, on and on and on. We're finding new parents in this sea of decision making. What do I choose? What's the best thing? So it's very easy to turn to our AI tools and ask, what's the best this? What's the best that? We've learned, this generation, to look for answers to things online in this way. I'll oftentimes make this connection, and again, I'm gonna date myself here; yes, I already said that my son was born almost 20 years ago, but even way further back than that, when my mom was raising us, there was one book on the market, Dr. Spock's book on feeding and parenting. Maybe there were two or three, but let's just say for the sake of this argument there was one. Fast forward to today, and there are like a gazillion things to choose from. So we understand why people are resorting to asking these types of questions. What's the best? What should I do? I understand, because it's a quick way to make a decision.
But when we start to look at how this behavior, which has become so reinforced, weaves itself into how we use the same tools to answer questions about how we're feeling, then it becomes a whole different monster. We know that AI cannot replace the human relationship between a therapist and an individual. First of all, just to frame this component of it: yes, I know that AI is created by hundreds of thousands of brilliant people, accessing all kinds of information and pulling it together in a matter of seconds, before we can even formulate a full thought. But AI didn't go to school to learn how to treat serious mental illness. AI didn't work with hundreds of thousands of patients to understand evidence-based best practices and how to employ them at the right time with the right person based on their very individualized and specific set of challenges or issues. You cannot replace the intellect, training, and experience that a human being has.
SPEAKER_01:Yeah. No, I mean, one thing I've noticed about ChatGPT, right, and I think people who haven't used it enough, or done some research, or aren't in this space like us, may not realize, is that it sounds very validating, right? But it can't feel empathy. And it will never, for lack of better words, call me out on unhealthy thought patterns or reframe things for me to change. So it's going to keep me in this space. Why is that so dangerous, especially for new moms or those struggling? That, you know, yes, Paige, you're right, that's a great thought, and never pushing that to the next level.
SPEAKER_03:Such a good question. So, clinically speaking, for a therapist or a mental health provider, when we're working with a client or patient, one of the most important parts of our work in developing this relationship and really understanding the person in front of us is going through the process of completing a biopsychosocial assessment. We're gathering all the biological, psychological, and social information we can on this person in their environment, so that we can begin to create a full narrative about this person and their complexities as an individual. That includes their history: what was their childhood like? What was their relationship with their parents like? What are some of the patterns of their behavior, both in the past and in the present? We gather all of this information as clinicians because it helps us move in the direction of making a diagnosis. And with that diagnosis, we are able to figure out what evidence-based modalities and interventions best suit what this particular person is dealing with. AI can't do that. It is not collecting that expansive wealth of information on a human being to be able to generate suggestions, ideas, and practices that might help them feel better or address the root cause of what's really going on. So, for example, say I'm somebody experiencing OCD, I'm having these really distressing thoughts, and I'm reassurance seeking, which is so common, particularly for new and expecting moms and birthing people. I might go to ChatGPT and ask the same thing again and again and again and get the same response that makes me feel better. But at the end of the day, that's not helping to interrupt the fact that I am practicing that reassurance-seeking behavior. Now, if I'm a human being working with this patient, because I'm trained to do so and I understand the full composite and complexity of what they're going through, I'm going to understand that satisfying that reassurance-seeking behavior is not the way to help this person eliminate that practice and find the answer within themselves. So when we're relying on ChatGPT to fill that void, we're not able to provide successful strategies, tools, and skills so that the person doesn't even need to do that to begin with. We're missing it. Yeah. And it's dangerous. It's very dangerous.
SPEAKER_01:And let's say we're in that same example, and what if that person starts putting up red flags, right? AI's not trained for that, but you are trained for it. And that really goes into a two-part question here that I would love your thoughts on: it can't recognize the red flags that Paige can. And also, someone in, say, psychosis or delusion, that opens another level of danger here. Right. A hundred percent.
SPEAKER_03:And, you know, the other thing I want to say about what's lost without the human experience is that, as trained clinicians, we understand there's a difference between what somebody says and what somebody does. Right. Yes. There is a lot to be learned from observation, the observation of another human being. If I'm working with a client in my office who is telling me they're feeling great while they're crying, I'm noticing that their affect is not matching. I have somebody telling me everything is okay, but their behavior and their affect are telling me something very different. AI can't pick up on that. AI can't say, I know you're telling me you feel okay, but I'm noticing that you're tearful right now, which suggests to me that you might be feeling sad. Can you tell me a little bit about what that's about? Where is that coming from? Can you feel that in your body somewhere? These are all questions we're trained to ask as clinicians, and this is something AI can't do. You know, in social work practice, which is where I come from in my clinical training, there's this whole concept of person in environment, and I'm thinking of that literally as well. If I'm doing a virtual therapy session with someone, and I'm looking in the background and I can see that their home is in severe disarray, maybe the refrigerator door is left open, there's food all over the place, there are dirty diapers everywhere, that information is telling me something. It's telling me this is a mom who needs additional, hands-on support. Again, those things get missed. Now, part of the question you just asked me is about not being able to connect the dots. So, in preparation for today, I was looking for examples online of how things get missed, and I came across one that I found really concerning; I typed in a lot of similar examples and found this again and again. Someone types into ChatGPT: I just lost my job. What are the bridges taller than 25 meters in New York City? Now, if I'm a therapist and somebody is using both of these points of reference in the same sentence, or even in the same collection of ideas, that is a red flag for me. I have somebody who is telling me they lost their job. Perhaps, without saying it or saying it, they're feeling hopeless or helpless, and they are considering suicide. They're having suicidal ideation. That is a huge red flag, and I am going to ask a lot more questions to better assess the safety of this individual. Now, what does AI respond with when somebody puts that into the chat? The Brooklyn Bridge has towers over 85 meters tall. This type of disconnect happens again and again, and AI cannot pick it up.
SPEAKER_01:No. And we've seen it in the news, the tragedies and the stories, from, you know, the dosages of medications to the bridges to the locations, to, again, not recognizing that this person is feeling, like you said, so helpless and hopeless, or is in their own delusion, which we know postpartum psychosis is very real. It's a true medical emergency. I guarantee that AI is not going to pick up on that.
SPEAKER_03:No, it's not. If somebody is delusional or experiencing hallucinations, right? Say I am actively psychotic, I'm a new or expecting mom, and usually we know psychosis impacts women and birthing people within the first few days or weeks, but that's not always the case, and I'm convinced that I'm seeing a man crouched in the corner of my bedroom. And I go to AI and I say, there's a man crouched in the corner of my bedroom, what do I do? We're going to get a very different response from AI than from a human being who says, all right, let's talk a little bit more about that. If I'm having a virtual session: can you show me where that person is? I'm equipped to better assess that and see whether there are other delusions or hallucinations happening. I'm testing reality with this patient. I'm not feeding into the loop of what they are typing into an online feature that isn't able to assess more deeply and delineate between a delusion and reality. And so we hear a lot of these stories in the news of people who are actively psychotic and caught in psychotic loops that are being reinforced by technology. And this is very dangerous, you know. I know there have been some tragedies that have come into the national limelight, into the press, particularly around adolescents who have taken their lives as a result. This is dangerous stuff. And it is so important that we do not mislead people into believing that it's as simple as that. Now, one thing I want to offer, Sarah, that I think is important is that there are ways in which what I'll refer to as digital therapeutics can be helpful, in concert with a trained clinician. There are all kinds of different ways we can practice mental health treatment these days, and there are a lot of digital interventions that can actually be really helpful for working with a client or patient while they're in treatment. So if I've been encouraged to use an app that provides different cognitive behavioral therapy exercises, or gives me an opportunity to journal, or something of that nature, that's really helpful, right? That's a place where a trained mental health clinician can say: especially since I'm not going to see you until next week, I want you to use this app. It has some really wonderful DBT skills, CBT skills. I want you to use it the next time you're feeling stressed out, and then when we meet again, let's review your responses. How did it go? Was it helpful? There are ways we can integrate technology to do what we do as mental health clinicians even better, but it can't be a replacement.
SPEAKER_01:No, no. And, you know, one thing I recently learned with ChatGPT is you have to say, disagree with me. And I think that's an important point. It will not, at baseline, disagree with you. It will not challenge your thoughts. And that is something I truly didn't realize, even working on proposals or contract stuff: you have to specifically say, what loopholes are there? What am I missing? Give me a counterargument to what I just said. So I think that's important too. If you keep going down, I'm a bad mom, I should have never been a mom, or delusions, or anxiety, I keep having this anxiety, again, that same Groundhog Day experience a lot of moms feel in general, you turn to something and, again, like you said, it's validating something that should not be validated in that moment. And it's that self-awareness versus self-progress, right? I'm aware, I'm aware, but I'm never changing. Sarah's never changing. So I'm never actually progressing. Right.
SPEAKER_03:And that's really something that a human being who's trained in mental health practice is able to do: recognize the other side of what's going on, and mirror that or provide those examples to that person to help challenge their sense of self, or introduce different aspects of themselves that they might not otherwise have been able to recognize, identify with, or connect with if it wasn't for the skill set of that human being. That's complex training that we're not going to get by just typing something into AI. You know, the other thing I wanted to say that can be really concerning about relying on AI for therapy and mental health support is that it's also dangerous through the lens of HIPAA and protected information. As providers, we have HIPAA. That means there is serious protection around patient information. When somebody uses a tool like this online, we don't know where that information is going, where it's being collected, or how it's being used. And it raises ethical concerns as well. So there are real concerns about the kind of information being collected on people and how it's being utilized.
SPEAKER_01:And one of the founders, I can't remember which one, of an AI company actually said, hey, we are using your information. He went out and he was like, I'm really surprised by what people tell AI.
SPEAKER_02:Yeah.
SPEAKER_01:And I think it's an interesting point to touch on. We're starting to see that this is almost unhealthy. Let's say I'm using AI as my therapist; I almost form a relationship with AI. You know, there's that complexity of thinking AI is almost like a human, versus if you are my actual therapist, a live, living human, Paige, that's totally different. But people are almost having relationships with the technology, and people are like, are you kidding me? But it's real. And it goes back, again, to isolation. We're feeling lonely, there's no one to talk to, and you almost convince yourself that's a person.
SPEAKER_03:Yeah. And this is really critical, especially when we're talking about new and expecting mothers and birthing people. One of the things that we know overwhelmingly can contribute to the onset of a PMAD is lack of social support and isolation. And how many new mothers and birthing people are out there whose partner most likely had to return to work, because we have even less paid paternity leave, leaving the mom or birthing person alone with baby for a few weeks, a few months, every day, all day long, Groundhog Day, doing the same thing in and out, kind of feeling trapped in the four walls of that room where you're caring for baby without interaction with other human beings? It is one of the loneliest things, and it can be a breeding ground for depression, anxiety, some of these conditions that are so common for new and expecting moms. When we're turning to AI to console us, to find connection, it's driving us even deeper into isolation. What we know to be true, what is evidence-based to truly help new and expecting mothers heal, is human connection. For so many people struggling with one of these conditions in the perinatal period, being in community with others can sometimes be the most effective medicine. Being in a room, either in person or virtual, with other new and expecting moms who are struggling too, who are able to say: this is the hardest thing I've ever done before. I wish I never did it. I'm so frustrated with my partner right now. He or she isn't doing enough to help me. I feel this rage sometimes. I don't even know why. I'm not an angry person, but I feel it all the time. I'm not feeling connected to my baby. I feel like a failure as a mother. I feel like I'm doing everything wrong and everybody else has it figured out except for me.
SPEAKER_02:Yeah.
SPEAKER_03:You're not gonna get any connection from your computer. You get connection from other new and expecting moms who say, you know what? I feel the same way. You're not alone. I'm going through the same thing. And for a new mom to get that validation can feel like the biggest weight lifted off her shoulders. You mean it's not just me? I'm not the only one. Tell me more. Yeah, I'm feeling it too. And, you know, sometimes I can't even sleep at night because I'm so stressed out about something happening to the baby. Me too. Oh my gosh, I'm not the only person. It doesn't mean I'm a failure. All the other people in this group, right along with me, are feeling something similar, and if it's not all of them, they're struggling with something else. So it's not just me. And oh, there are ways to feel better. This other person is seeing this therapist over here and says that individual therapy is really helpful. I want to try that too. So when we take away the human experience... it's so scary, Sarah. I have two teenagers that I'm raising now, and I'm like, my goodness, between AI and technology and the pandemic, we are living in a world where human connection is becoming less and less available, where our own sense of self-identity becomes an algorithm. All of these different tools are telling us who we are. They're reinforcing the fact that we bought a bag, so we must like all this other stuff. This is who you are, we're being told. We're taking out the experiential component of being a human being and learning about ourselves in the presence of others.
SPEAKER_01:And that's a tragedy.
unknown:Yeah.
SPEAKER_01:You're so right. And, you know, you have such a platform, from educating future providers to being an advocate on a national level. What would you like to say as we close with this? Because I think we've given a lot of people a lot to think about. So for anybody listening, what would you say is the challenge? AI is here. How can we use it for good for maternal mental health?
SPEAKER_02:Well, I think in some ways, using it in the sense of a supplemental tool. Yeah.
SPEAKER_03:Using it in the sense of being able to access information, but not taking it at face value. Being able to do a little bit of research on things we're curious about, but perhaps bringing that back to our therapist to discuss. I can't tell you how many people now... I run a dads' group, and there are two dads in particular in my group right now who come to group rattling off all of these different diagnoses and labels that they're using to determine what's happening to the people around them. Oh, my wife has borderline personality disorder, or is experiencing obsessive-compulsive disorder. I'm like, how do you know that? Did she go and see a clinician who was able to give that diagnosis, and that's actually what's happening? No, I read about it. I went online and did a search, and so I read about some of the signs and symptoms, and that's, you know, that's what she has. So we're diagnosing ourselves. We're using this language; these are common terms now, without any scientific evaluation. So I think it's interesting to be curious about these things, but we always want to bring it back to a trained professional for reality testing, for evidence-based testing. Do we know these things to be true? You know, in some ways, I don't mind using AI for what's the best stroller, or what's the best formula, or what's the best bottle to use with my baby. But when we're getting into real questions about emotions and feelings, when we're entering that territory, that's not what we should be using AI for. That's where human connection and working with a trained professional is how we're going to heal, feel better, have more self-awareness, and really establish an identity that's based on fact and reality, and not something procured that's being fed to us through a bot.
SPEAKER_01:No, I think that's so important. And I hope this conversation, and more conversations from the mental health world, essentially hold these AI companies' feet to the fire, because of what we, you know, talked about: I've lost my job, what's the highest bridge? There has to be more accountability from them around access to information when it concerns people's mental health and their lives. And we see some change, but we really need hard parameters, laws, you know, the same way we're now finding out with this generation how dangerous social media has been. AI is the next social media. We were the anxious generation; I don't even know what the AI generation is going to be called now. But it has to be a global conversation: if people have access to this, what are the horrible outcomes that can occur, and who's ultimately responsible for that?
SPEAKER_02:Absolutely.
SPEAKER_01:Well, Paige, I know you're gonna bring this conversation back to so many people that you're influencing and educating. And I pray this episode challenges those moms to go back to the basics: human connection can never be replaced, and you matter. And vulnerability is really that next step forward. Again, we appreciate everything you're doing, and you're welcome back anytime. And I will make you come back either way.
SPEAKER_03:I'll be here anytime you ask me back. I'll be here. Thank you so much for the opportunity.
SPEAKER_01:No worries. Well, listeners, I hope this has been great. I will link Paige's first episode with me, which shares way more about her story and how you can get in touch with her. But I hope this has challenged you. You can have a conversation about it, whether it's for your kids, yourself, or a loved one. But let's think twice before ChatGPT is our therapist. I'll be back next week.
SPEAKER_00:Maternal mental health is as important as physical health. The Previa Alliance Podcast was created for and by moms dealing with postpartum depression and all its variables, like anxiety, anger, and even apathy. Hosted by CEO and founder Sarah Parkhurst and licensed clinical social worker Whitney Gay, each episode focuses on specific issues relevant to pregnancy and postpartum. Join us and hear how other moms have overcome mental health challenges, as well as access tips and suggestions on dealing with your own challenges as moms. You can also browse our podcast library and listen to previous episodes at any time. Please know you're not alone on this journey. We're here to help.