The widespread adoption of Large Language Models (LLMs) is profoundly altering human-AI interaction, raising critical questions about their impact on online collaborative knowledge ecosystems. We empirically investigate these dynamics using a comprehensive dataset from \stackoverflow{}, the leading Q\&A platform for programming and technical expertise. Employing a difference-in-differences framework and knowledge network analysis, we show that the influence of LLMs extends beyond simply substituting for common queries. Following the public release of ChatGPT (based on GPT-3.5) in November 2022, \stackoverflow{} experienced a significant decline in overall question volume, yet it exhibited a disproportionate surge in questions involving ``novel specializations'', that is, previously unobserved combinations of question tags. Concurrently, the underlying knowledge network underwent a pronounced structural reorganization, characterized by declining efficiency (measured via closeness centrality) and a growing segregation of technical discourse. Our analysis shows that this novelty is driven not by new technology tags but by the recombination of existing niche tags. Crucially, these transformations are unique to \stackoverflow{}: other communities (e.g., the Mathematics and Statistics Stack Exchange sites) show stable traffic and network structures. These findings suggest that LLMs act as cognitive offloading tools, enabling humans to explore novel, niche problem spaces while simultaneously fostering a more fragmented and specialized knowledge landscape. This dynamic carries important implications for human-AI co-evolution, online collective intelligence, and the composition of future AI training data.
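For concreteness, the two quantities named above admit standard textbook forms; the notation below is only an illustrative sketch, in which the knowledge network is assumed, for illustration, to be the tag co-occurrence graph, and the exact specification (controls, fixed-effect structure, distance definition) may differ in the body of the paper:
\[
y_{ct} = \alpha_c + \lambda_t + \beta \,\bigl(\mathrm{SO}_c \times \mathrm{Post}_t\bigr) + \varepsilon_{ct},
\qquad
C(v) = \frac{N-1}{\sum_{u \neq v} d(u, v)},
\]
where $y_{ct}$ is an activity or network outcome for community $c$ in period $t$, $\mathrm{SO}_c$ indicates \stackoverflow{}, $\mathrm{Post}_t$ indicates periods after November 2022, $\beta$ is the difference-in-differences estimate, and $C(v)$ is the closeness centrality of node $v$ given shortest-path distances $d(u, v)$ in an $N$-node network.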