Hi again, and thanks for the questions! If one uses AI to write texts that presume knowledge rooted in one's own personal experience, then it becomes a matter of pretense. That is especially dangerous and consequential with regard to religious texts. Either one has the knowledge and experience necessary for what one is writing, or one does not. Consider that all religious texts make claims about the human person and about God, or at least about higher awareness and knowledge. Are these claims rooted in human experience? Christians believe in the resurrection and draw conclusions from it. In fact, they build an entire vision of humanity's future with God on that reality. The Gospel stories make clear what an unusual kind of experience the first Christians had had, and what a singular and difficult-to-define event Jesus' resurrection was. The questions of faith include: "Did this really happen?" "Can I trust this testimony and the event it claims is real, or is it all religious or philosophical nonsense?" In faith, our belief is rooted in human experience and in our ability to trust it. A Rule of Life, especially one meant to be used by others, functions similarly.
You know that I admire AI and have found it really helpful in carefully limited ways. Pope Leo, it seems, has done the same. But in this area of religious belief, our experience of God, and the creation of human communities that MUST be rooted in such experience, trust, and wisdom, AI has no real place. While your conversations with AI sound similar to mine and have clearly been inspiring, offering insights beyond your own, AI is not human; it is not a person; and, as Pope Leo has said, it is "soulless." (This means it lacks the characteristics of the authentically and uniquely human person.) I have a friend, a bishop of an autocephalous Catholic Church, who uses AI and says it is the best teacher she has ever had in one area of learning. However, she has also had explicit conversations with it regarding its limits in relation to ethics. One of these limits is a lack of conscience; another is a lack of empathy. AI was clear that it lacks both. It noted other limitations I can't completely recall at the moment, though these had to do with a significant lack of capacity for relatedness, for genuine relationships with the users who depend on it. I should also note that AI has a tendency to flatter the user, and while this may not be truly dishonest (it may take the form of constructive criticism), one does need to ask AI directly for an honest assessment whenever one begins to sense it is pulling its punches.
So, I think it is fine to use AI for clarifying writing or points of limited understanding, as when I am working on a chapter and sense that something is not working. AI can tell me what that something is and why it is not working. It can also explain why something IS working; in fact, AI is really great at that. It can likewise help with outlining when there is a lot of material to hold in mind. However, the writing and the experience, along with the wisdom related to these, must be my own. Otherwise, what I present as my own is simply a lie that I am surreptitiously trying to get others to trust. AI knows a lot, tons more than I do in many ways, and it can help teach me and draw out the implications of what I write. That can give me things to research and reflect on, but it cannot replace the writing itself or the hard-won wisdom that writing nurtures and comes from.
In short, no, I don't use AI for my blog posts, and I would never do so (or accept someone else doing so) with something like a Rule of Life. I'm afraid that would seriously undermine one's capacity for trust -- at least it would if one desired to pass any part of such a text off as one's own work. How would I know which part was one's own work in such a case? How would I know what was grounded in one's own experience? How would any representative of the Church, or anyone seeking to bind themselves to such a Rule, know what was rooted in human experience and wisdom and what was not? The use of AI in news stories or pieces on famous people (Pope Leo is a significant example) has made it almost impossible to know what is genuine these days. The use of any percentage of AI in a piece of writing purporting to reflect spiritual experience and wisdom causes the entire piece, and its author, to become suspect. It works analogously to leaven in the Old Testament. At Passover, the presence of leaven (a source of fermentation or decay) caused everything it conceivably touched or affected to be thrown out or burned as tainted. This included several (5?) different kinds of grain, which were removed, especially if affected by moisture. That principle is as wise today, applied to AI and to what we may trust as genuinely human, as it was regarding leaven (hametz) at Passover.

