Navigating the Cognitive Maze of AI Text Imitation: A CLT-Driven Approach
Abstract
This research investigates the cognitive challenges that impede AI text generation and examines strategies for enhancing the naturalness, fluency, and overall quality of generated text. A multifaceted approach was employed, combining human evaluations, cognitive load measures, and automated evaluation metrics. The study used the Yelp Reviews dataset for experimentation and the "LLM - Detect AI-Generated Text" Kaggle dataset for validation. The research revealed an intricate interplay among the intrinsic, extraneous, and germane load factors that influence the effectiveness of AI text generation. Practical insights address challenges such as handling complex sentence structures, comprehending unfamiliar vocabulary, and interpreting ambiguous language. Human evaluations confirmed the model's proficiency in generating natural, fluent text, while cognitive load measures provided nuanced insight into how AI-generated text is processed. The study also demonstrated the accuracy of the AI Content Detection Tool in distinguishing human-written from AI-generated text. The implications include the need for continuous model refinement and adaptation to changing linguistic patterns to ensure long-term effectiveness. The findings contribute to the ongoing dialogue on the ethical and practical use of AI language models, shaping future developments in the domain.