The Center for Literate Values ~ Defending the Western tradition of responsible individualism, disciplined freedom, tasteful creativity, common sense, and faith in a supreme moral being.
P R A E S I D I U M
A Common-Sense Journal of Literary and Cultural Analysis
10.3 (Summer 2010)
Narrative Haeretics: One Key to Understanding an Artfully Made Story (Part One)
John R. Harris
I. The Failure of “Literary Criticism”
What passes for literary scholarship these days is generally of two houses: historical or theoretical. Historical research assumes that authors reflect the views and values of their time and are impacted by events in a classic stimulus-response behaviorism. As a rule, studies of more ancient authors adhere more narrowly to history, for such figures recede farther from our own prejudices and customs into a misty past wherein any bell or beacon is a welcome reference. Vergil’s Aeneid is viewed (quite unconsciously, I think—for the full implications of historical criticism are never scrutinized) as supporting the political program of Augustus automatically. The emperor paid the poet a stipend, so the poet paid the emperor homage: QED. Yet the suggestion that F. Scott Fitzgerald or Kurt Vonnegut was propping up the dominant political forces of his day would be dismissed with contempt. Of course, the dismissal of such a tight causality in the work of modern artists is entirely justified. Authors before the eighteenth century were largely dependent upon some degree of patronage from people in high places. One may just as well observe, however, that authors in the industrialized West have proved heavily dependent either upon a) the affection of the reading masses, or b) the coddling of an academic/journalistic elite.
How, then, are Fitzgerald and Vonnegut significantly freer to speak their mind than Vergil? If a Joseph Conrad could speak his mind so cannily that the masses could savor his yarns without the elite’s being readily able to divine his philosophy, why might not Vergil have done the same? A creative genius rarely sees things as does the man on the street, yet may well describe what he sees in a manner that draws a crowd. Why do we not extend this same generosity of conception to the past?
Artists, literary and otherwise, are visionaries. They perceive reality through metaphor: they see a fiery falling star where the rest of us see only another factory’s smokestack or a dark prince’s rise to power. To be sure, smokestacks and princes are historical in quality: how to study Oliver Goldsmith without reading up on The Enclosure or Guy de Maupassant without learning something of the Franco-Prussian War? My purpose here is not to deny the relevance of history to art, but to deny that history necessarily has a primary influence on the artist. Would Baudelaire likely have been happier (and hence far less likely to have composed poetry) if he had lived a century earlier or later? The elective affinity for Edgar Allan Poe which incited him to garland the American well before critics on this side of the Atlantic would do so seems to have run deep: both men were stunningly alike in what might be called their moral vision as well as in their specific tastes. Yet what had Paris to do with Baltimore in the mid-nineteenth century? Were we to consult nothing but historical environment, we should have to range Baudelaire–absurdly–much closer to Victor Hugo.
Theoretical scholarship in the literary world is yet more tendentious, in that it advances a kind of “meta-history”. That is, the theoretician believes devoutly in certain biological or psychological forces that construct all history at the foundational level. Freudians see all in terms of primal strains within the nuclear family that radiate far and wide into communal life and public endeavor. Marxists are a disguised species of social Darwinist: they insistently view class struggle as the evolutionary engine driving the ordinary to ever greater prominence in literature as well as life. Gender theorists take their cue from Marxists, substituting for the nondescript little guy various types of suppressed or marginalized sexual practice (starting simply with the female perspective in the sixties, but proceeding to “the gay”, the lesbian, and so forth). Freudian critics have actually grown rather rare today, inasmuch as their theory does not countenance any progressive evolution of the kind so dear to contemporary academics. At most, a Freudian might side with a gender critic in arguing that the nuclear family needs to be exploded so as to liberate future generations from its pernicious influence (a position roundly endorsed by Marxism, wherein family, religion, and ethnicity all interfere with the centralized state’s programs)… but this, of course, would not be Freud’s view at all.
The trouble with all such theoretical perspectives is that they once again, like the general over-emphasis of historical influence, denigrate or ignore the unique character of the artist and his art. Art becomes no more than an expression daringly ahead of its time in the evolutionary struggle. Auerbach made much of Odysseus’s scar, though it would have attracted no notice whatever in a nineteenth-century novel, because at the time writers did not typically award attention to such detail: one small step for the common man. A feminist critic would never suggest that the institution of marriage as represented in Marie de France should be desired by any self-respecting contemporary woman; but the fact that the convention is drawing even modest scrutiny in the twelfth century renders the lais interesting.
In other words, these works have no enduring and independent literate value—no beauty. Indeed, the very notion of beauty is a tool of oppression to the theorist: a manufactured seal of holiness with which the established powers have sought to elevate certain behaviors until they inspire a sort of dumb reverence (e.g., service within the order among the privileged, submission to the order among the subordinate). Beauty is brainwashing—the repeated lie of a bully. The true art work, to the theorist’s eye, is thus adjusted downward, now a mere handmaiden to the progress of “the movement”. Such works are commemorated within a much more flexible canon in the way that the state might commission statues to revolutionary leaders (once they are safely dead). The relatively conservative Freudian may have a little more to offer than this: i.e., art as an imaginative expression of deeply suppressed anxiety. Even here, an ancillary kind of function is implied for the art work. It is therapy—it helps us face dark truths which we would otherwise never confront, and so to “progress” toward happier, better adjusted lives (wherein art, one presumes, will have grown all but unnecessary).
Victimology is the poor stepchild of history and theory. Though the sentiment which drives “victim” or “minority” studies—Black Studies, Chicano/Chicana Studies, Native American Studies, etc., etc.—is essentially the same as that which animates contemporary theory, the theoretical basis is much flimsier. Naturally, a desire to see the working class or women ascend to power in a great evolutionary struggle would seem mean-spirited if it did not extend to other categories of oppressed sufferer; and history offers many such categories, from the enslaved African to the hunted-and-slaughtered New World aborigine. The trouble comes when (or if, since for many academics this moment is indefinitely postponed) one reflects that race and ethnicity are not natural determinants—except for racists—of social status. While nature has imposed upon women a tendency to be physically weaker and to be homebound by the duties of bearing and raising children, and while some degree of class stratification is also natural once a hunter-gatherer unit settles down and begins to trade in markets, no such inevitability links dark skin and slavery or genocide. The Celts enslaved other Celts, the Greeks enslaved other Greeks, and the Romans enslaved both. Africans were enslaving other Africans long before Arab entrepreneurs started selling their heathen brethren to Europeans. As for indigenous American peoples, their struggle with Europe was technological. Having held off the Vikings handily in an earlier era, they were less of a match for mounted marksmen. The shifting tides of cultural circumstance scraped such dismal valleys into history’s landscape, not a lineally moving march of human progress. Indeed, the same explosion of technological progress which promoted the cause of women and the working poor in the nineteenth century was instrumental in plunging American blacks (first as slaves, then as freedmen) into economic irrelevance and in all but exterminating the Plains Indians.
Minority Studies based merely on racial or ethnic distinction, then, can have no meaningful tie to literary art. A certain regimen of readings and discussions may gush with sympathy for a given set of victims—but tragedy restricted to a narrow cast of characters who share a particular skin tone is not a literary genre, let alone an aesthetic. One might as well pool together novels and poems about people named Fitzhugh or Grabowski into a course—and, to be sure, courses about the “Irish experience” or the “Polish experience” have been taught. But literary courses? How does a shared historical experience create literary value? Few such experiences could be more powerful than the Jewish experience of the Holocaust. Would any novel about Treblinka, then, necessarily be a great work? Would the novel’s greatness be affected by whether or not its author was Jewish? Is the easily accessible—in fact, extratextual—reference which such a novel would “enjoy” to overwhelming emotion (that is, the image of death camps sure to haunt its pages whether or not the author artfully evoked them) a claim to artistic excellence? In the same way, does a contemporary mass-marketed film achieve artistic ends when its abrupt and bloody special effects render the audience nauseous?
Aristotle thought not. The ancients were the overt founders of—and quite often sticklers for—the science of aesthetics. The Greek verb aisthanomai means “I perceive”. In this mere name we already find a recognition of the art work’s sensory impression as essential to its success or failure. The work must be of a certain character, and that character seems to fulfill certain structural criteria (Aristotle, for instance, writes famously of the drama’s beginning, middle, and end). To be sure, the ancients sometimes appeared to dwell excessively upon structure—upon the appropriateness of this or that verse form, say, for this or that occasion—in a descriptive vein that had ambitions of being prescriptive. Yet even the most elusive art work—the most nuanced, or suggestive, or what we might call today “poetic” (with our emphasis on metaphor rather than meter)—invariably inspires the perceiving mind to transcend structure precisely by creating a structural framework and then showing its inadequacy. Art works are skillful. They have in common (when successful) an ability to involve the perceiver in certain expectations or to draw him into reflecting upon perceived things in a certain rhythm. They are not mechanical or formulaic for the very reason that once a formula may be composed for a particular genre, that genre must either move forward or grow stale; for the expectation must not be formally expected even before the work’s overture or first word, or else no fresh, immediate experience of the object’s power remains.
Contemporary literary criticism has little to do with aesthetics—is either blissfully ignorant of it, polemically hostile to it, or perversely playful with some obtuse miscasting of it. My position, to reiterate, is not that historical circumstance need not be studied in the effort to understand art better, for expectations can change from century to century. (Few of us, for instance, would know that red was the most beautiful color to the ancient Irish, or that the vowel sounds in “Kokytos” and “Styx” were especially abhorrent to an ancient Greek.) Furthermore, I do not rebuke the attempt to trace certain environmental causes which, as it were, preempt historical accident. The suggestion already implicit in Tacitus that Germany’s climatic and topographical conditions magnify her inhabitants’ sense of racial purity (Germania 4-5) may indeed explain some small part of the region’s fierce, lately tragic tribalism. Yet in the matter of art, the preeminent quality which interests us—to which all the rest is merely tweaking and tuning—must be the aesthetic; and the sad fact that academic programs in literature devote not an hour at any level to examining this quality must be accepted as proof that such programs are largely a waste of time, at least for those who truly value literary art.
II. Classical Aesthetics: A Case Based on Common Sense
I will not waste space arguing lengthily on behalf of common sense. The classical view of aesthetics to which I keep referring has just a few basic tenets which must be emphasized here.
First, classical aesthetics recognizes that art is not reality—that art imitates reality (for how could we perceive anything coherently that bore no likeness whatever to a previous object or experience?), but that the two are also somehow insuperably distinct. Early in his Poetics, Aristotle writes, “For in the representations of those very things which we see with distress [in real life] we rejoice as an audience, though they be sharply detailed” (1448b. 4. 10-11). Aristotle cites fearsome animals in example—and even cadavers. The contemporary aesthete might seize upon this passage to justify the “tasteless taste” of the avant-garde, which holds up for admiration such wonders as Crucifixes in urine and Madonnas painted in feces. Aristotle is not prescribing the art work’s content, however: he is merely stressing the “disconnect” between reality and art by remarking that the latter sometimes creates beauty from images intolerable in daily existence.
Immanuel Kant approaches this critical disjunction less from an attention to objects than from an evaluation of the thought involved in their perceiving. His Critique of Judgment is formidably abstract to the modern student, yet is truly dedicated to the same classical position of common sense. Kant explains in the seventh section of his introduction,
… the subjective [element] in a representation, which cannot be an ingredient of cognition, is the pleasure or pain which is bound up with it; for through it I cognise nothing in the object of the representation, although it may be the effect of some cognition. Now the purposiveness of a thing, so far as it is represented in perception, is no characteristic of the Object itself (for such cannot be perceived), although it may be inferred from a cognition of things. The purposiveness, therefore, which precedes the cognition of an Object, and which, even without our wishing to use the representation of it for cognition, is, at the same time, immediately bound up with it, is that subjective [element] which cannot be an ingredient in cognition.
The extreme density of this passage is in nowise Archbishop Bernard’s fault: no translator of Kant has an easy row to hoe. The essential ideas, though, are really not suspect. The beautiful object engages the perceiver’s mind in a fashion which delights it—which both stirs its curiosity and rewards its second and third looks with further detail. (Kant is careful in other sections [e.g., Section 3: “Satisfaction in the pleasant is bound up with interest”] to distinguish aesthetic pleasure from stirred appetite, which involves a recognition of something pleasing to the senses rather than an intellectual assessment. Pornography is not art; neither is good cooking.) This unique engagement of mind by object creates a feeling of purpose—“purposiveness” (Zweckmässigkeit)—which is not part of the perceived thing’s objective nature as are its height, weight, etc. That a cloud may strike the idle investigator as a castle one moment and then as a billowing wave does not interest the meteorologist. Yet the meteorologist, his forecast done, may take time to admire the cloud’s beauty, “even without… wishing to use the representation of it for cognition,” because the cloud’s beauty is “immediately bound up with it”. Beautiful things exist, are real; their reality, however, lies in the subjective response which they elicit—in their power to elicit that response forcefully from a great many subjects—and not in their purely physical character. Shadow is indispensable to many a beautiful landscape, and shadows do exist; yet a tree does not possess its shadow as it possesses x thousands of leaves.
This discussion has already veered into a second fundamental fact about the art object: it has the mysterious ability to make the perceiver perceive more than itself—its physical contours or sounds or words—through the very act of perceiving it. The object, one might say, is always (as a work of art) both itself and not itself, or more than itself. A table or a shoe, devoid of setting, does not share in this quality: only certain rare objects have it for virtually all who behold them. Kant is perhaps less helpful in making the case here than Aristotle. His intriguing idea of a subjectively perceived purpose is seldom assisted by specific examples, leaving us to supply much of his intent. The following statement from Section 12 seems at least to insist upon a kind of free speculation of the perceiver’s imaginative resources: “The consciousness of the mere formal purposiveness in the play of the subject’s cognitive powers, in a representation through which an object is given, is the pleasure itself.” The word “formal” has suggested to later generations of formalist critics that the art work’s quantitative, measurable dimensions are more vital to it than qualitative nuance. Kant in fact declares in the Critique of Judgment, just after the previous citation, that form trumps color. “The colours which light up the sketch belong to the charm; they may indeed enliven the object for sensation, but they cannot make it worthy of contemplation and beautiful.” This prejudice extends back to Aristotle. (Consider Poetics 1450b. 6. 1-3: “If someone were to slap on the most beautiful pigments randomly, the result would not please as much as the black-and-white sketch-artist’s”). Yet I believe that too much may be read into such passages. A certain somewhat recognizable form, after all, must draw the imagination into playful meanders at the art-work’s point of entry. 
Such a common-sense observation surely does not imply that the same form—or those which follow, if the work is sequential—refuses to release the imagination into a degree of uncertainty. There must be room to wander, just as there must be a room. About the room’s doorway, indeed, one might make much less critical noise if one could be assured of a well-initiated audience; for so ingenious is the “educated” mind at finding formal points of departure that a squeeze-and-smear approach to applying pigments may just produce something highly evocative. If a Rorschach test is generally not viewed as art, the reason is because the perceiver’s initial speculations are so very indeterminate (which is, of course, the test’s special virtue). True connoisseurs of ink blots would no doubt agree upon their own set of tasteful limits after identifying the lunatic fringe.
I do not read Kant, therefore, as straitjacketing art in the least. The majority of the Critique of Judgment is in fact devoted to the perception of the sublime—a speculative experience whose end or “purpose” is emphatically beyond any mind’s ability to hem up in perceptible images. True to his neoclassical time, to be sure, Kant contrasts the sublime with the beautiful. The latter is clearly more on the order of a still-life study, while the former is a stunned look into an alpine chasm. Wherever one may wish to set the limits of various terms, however, the foundational remark to be stressed here is that art uses quantities to create a playful zone in which the quantitative whole cannot be assessed without intellectual exercise. The artist must give us an object, porous and labyrinthine though it may become as we inspect more closely. That the object’s unity is assessed much more comfortably in Mozart than in Beethoven, in Titian than in Vermeer, in Doctor Faustus than in Faust, merely raises questions of degree which direct us to the audience’s sophistication.
Aristotle’s far more material observations about the formation of good drama leave us without any generality on the character of the perceiver’s imaginative play, but they are fertile with implications about its nature. Of the six dramatic elements, writes the Philosopher—plot (mythos), character (êthê—“personal habits”), verbal expression, thought (dianoia—little more than factual precision), visual effects, and song—the plot is the telos or end of tragedy, and thus the greatest element of them all. Even character must take a back seat. The drama’s figures, after all, “are of a certain nature in accord with their habits, but they are happy or otherwise in accord with their actions” (Poetics 1450a. 6. 9-23). In other words, the dramatic work is not a character sketch or a psychological treatise, but a sequence of events creating a perceptual whole as it proceeds from a to z. Just as the black-and-white diagram is artistically superior to the random riot of color, so the bare plot (if the artist, by design or incompetence, should fail to adorn it with the other five elements) would be superior to any amount of costume, eloquence, and music without a plot. The story is the narrative’s unity-giving parameter—the “room” into which this kind of art work invites us.
I shall take for granted (lest I grow tedious) the reader’s familiarity with some of Aristotle’s insights about how the good plot fits together. Surprise is important, yet the story-problem’s dénouement should be such that it strikes the audience as fully appropriate—even inevitable, in a sense—once the completed work is viewed in retrospect. “The release of the story’s tensions must come from within the story itself,” he asserts (1454a-b. 16. 38-1), and not from some deus ex machina. Our attention is further concentrated upon the discrete dramatic creation by a fairly long discussion of limiting the story to a certain time and place (1450b-51a. 7. ff.): “for beauty lies in size and in arrangement” (1450b. 7. 37-38). In a dozen practical ways—with an awareness of compositional detail, indeed, that almost belongs to the “how to” manual—Aristotle insists that a narrative, like a painting or a sculpture, is a single creation which depends for its success or failure upon what is put before the viewer’s eye.
We have clearly established, then, that the classical aesthetic does not allow an art object to siphon its effects parasitically from some set of attendant circumstances. It creates and exists within its own reality, or at least its own representation of that other, vastly more disputed reality. Within its special world, furthermore, the sensations and images of “real reality” are assembled with such skill that they may no longer be taken simply as they appear. Instead, they hearken to a host of things not quite seen or seen but vaguely: their objects become symbols, their sounds echo half-forgotten voices, their shapes cast shadows that could lead to heaven or hell, and so forth. Reading both Aristotle and Kant (who is in fact more of a Platonist through his attachment to the metaphysical), one might be forgiven for thinking that art is a game from this perspective—a puzzle, perhaps. At first the pieces seem confusingly many, then the intellect becomes engaged in finding fits, and at last the whole absorbs every piece on the table. The parallel is not inept—but neither is it adequate. A game or a puzzle has a true telos: it is plainly finished at a certain point, and the objective was always plainly to reach that point. A detective novel or similar mystery story may often be said without unfairness to “end when it ends”; but such narratives are an inferior sort of art for this very reason. The genuine art work remains resonant with unexamined possibilities. If it is a novel, one craves to go back and re-read it as few crave to re-read a superficial mystery once solved.
Even Aristotle, I believe, is sensitive to this “multivalence” of art. His choice of the word mythos to describe what is usually translated “plot” may have suggestions which we cannot capture in other tongues. A myth, after all, is not just any plot. It enjoys a timeless quality: for some reason, people have told it (whether artfully or poorly) for generations in the confidence that it conveys some undying truth. Now, if this were precisely Aristotle’s view, then he would have betrayed the emphasis that he places on the drama’s self-contained quality in so many other places. The story, that is, would not depend solely on its concentrated unity for success, but rather would steal some thunder from the archetypal narrative at its core. Since the drama of Aristotle’s day was in fact written only about mythic subjects, the inconsistency I have posed may well never have occurred to him; or he may have begun to understand myth in that allegorical fashion which Euripides (his least favorite of the great tragedians) seems to have exploited with particular effect.
Indeed, the third and final common-sense aesthetic observation I wish to make here is none other than that narratives—dramas, novels, short stories, and all other artful ways of relating a sequence of events—are never merely about what they are about. This is properly no more than a corollary to my second observation; but it nonetheless needs to be stated emphatically, because narrative art has a unique connection to “real life”. It takes place in time, as purely visual arts do not (or not in objective time: the minutes or hours a viewer may spend in front of a canvas or a cathedral are entirely determined by that viewer). Though musical performances also occupy a specific value of clock time, they do not refer to real life as does the story. Notes are not men and women: movements are not physical journeys or days at the office. The narrative must actually look like our life—or like someone’s life—in order to be taken seriously, for only a cartoon or a burlesque (if not an inventive—but wholly unliterary—science textbook) would trace the vagaries of an asteroid’s fragment from Jupiter to Mars. Due to this proximity to life, narrative art is exposed as no other kind to moral and political critiques. Its identity as literature is under constant attack from those whose interest lies in its suggestions about practical areas of human endeavor. The word praxis is indeed steadily flowing from Aristotle’s stylus as he writes about drama.
Yet in Aristotle’s case, we can scarcely confuse his matter with our own everyday lives, or even with those of his fellow Athenians. Homer, Sophocles, and every other master to whom he refers would never be accused, even in the fifth century, of scribbling facts rather than commemorating a golden heroic age. The struggle of antiquity’s increasingly literate authors is deceptively simple in this regard: it is all too easy to interpret their myth-based plots, that is, as pledging direct allegiance to the old myths. Their challenge as individual creators enjoying yet-untested discretionary powers was to handle ancient material with new subtlety; and they made progress, not surprisingly, with baby-steps. When we moderns look back upon the undertaking, we are apt to see in it (as did Erich Auerbach) a forever frustrated desire eventually to be rid of the past—to be ordinary, to paint Coke bottles, to call a spade a spade. We may imagine that Everyman, as author, was waging a kind of guerilla war with myth’s claims to eternal meaning.
This impression, I believe, is largely mistaken. The war was not with myth itself, but with the mythic variety of destiny which the old tales applied to complex human behavior like a choke-hold. Aristotle tells us that character is destiny, though he subordinates it aesthetically to plot: he is acutely aware, in other words, that narrative has general lessons to teach—that an individual praxis sends ripples into the universe—and that laying all at the foot of Olympus is a dereliction of authorial duty. The greater message is more important than ever to him: hence the need to tie it more intimately and convincingly to human choice.
As an author of several obscure novels myself, I find it inconceivable that any story-teller of any era would tell a tale with no broader semantic ambitions than the tale’s own terms—unless, perhaps, as a parody. Writers write that readers may read their writing; and who short of a pathological narcissist would believe that serious people would wish to pore over the details of his childhood or his love life without any hope of finding a more general application? Though the myths of the past may be dead and cold today (witness the recently pejorative use of the word “myth” itself), novelists are constantly searching for new ones. The very nihilism of a work like Camus’s Stranger is classical in its insistent, sublime rejection of meaning; for the new myth, to Camus, is that of Sisyphus—the impossibility of finding meaning and of ceasing to look for it.
A more pertinent way of stating my third point, then, might be that narrative art, far more than any other kind, engages the practical world—the realms both of individual and communal praxis (i.e., morality and politics). A story cannot avoid commentary on such non-aesthetic areas of value, and yet it remains an aesthetic work. Its success at that level, consequently, will be determined somewhat by how accurately it is perceived as projecting realities from the practical level. To be sure, other genres of art labor under similar obligations: a portrait of Napoleon on a horse is sure to draw criticism if the horse’s eyes are unrealistically wide apart (though such strictures are relaxed when tastes grow toward the symbolic or the oneiric). Only the dramatist, though, is expected to know psychology as well as anatomy. To an ancient, the psychological paradigms preexisted in the form of myth and—a further luxury—were universally known and accepted. As gods begin to dominate human practice less and less, however, and action instead becomes ever more the product of the individual’s free choice, the guidelines turn fewer and vaguer. A particular view of what motivates the human being is less likely to characterize an entire society, and heated arguments about the success of narrative artists as artists therefore ensue.
It is at this cultural moment that the student of narrative literature discovers a dire need for something like a science of “haeretics”.
III. Haeresis Defined and Applied to Literary Studies
The Greek word haireomai means “I choose”. In its active form (haireo), it shares many of the meanings of its Latin cognate, haereo: stick to, cling to, seize upon, cohere (cum + haereo). The Latin word, however, is not used metaphorically to express a choice—that is, something to which one figuratively binds oneself (the sense of “oneself” being captured by the Greek middle voice). The Greek word presents the further difficulty to us that its noun form, hairesis, is the culprit from which we draw our baleful “heresy”. Of course, a heresy in the original sense is indeed a mere choice; in historical context, it is a choice made against the most urgent advice and the highest authority of a precedent’s (especially a religious precedent’s) recognized exponents. It connotes willfulness—the desire to exercise one’s power of choice when one would do better to follow custom. For academics, no doubt, the heretic enjoys the same mystique as the rebel and the revolutionary.
None of this need give us an instant’s hesitation, but I mention it to remove any suspicion over my choice of the term. I merely wanted a word that would express something like “the study of choice, of free will”: haeretics was an obvious selection (or would you prefer a monstrosity on the order of “selectics” or “selology”?). There must be several reasons why the intelligentsia of the Western world has never produced a term for this “science”. The most innocent would be that we have regarded ourselves since the days of general literacy—i.e., since the birth of science itself—as self-evidently free. Why create an area of study for a faculty which lies at the very foundation of human nature, and which is hence a fact rather than a variety of options? Only for about half a century have scholars grown aware that competing notions of freedom and destiny are not colorful manifestations of cultural divergence, like headgear or favorite foods, but rather a “shadow line” of that spectrum running from orality to literacy, from tribalism to individualism.
Furthermore, the contemporary academy seems to have a vested interest in suppressing even this insight. Two of the trained intellectual’s bedrock axioms today are a rigidly enforced cultural relativism (“no culture must be considered in any way superior to any other”) and a metaphysical nihilism (“all human truths are biological at their origin: ‘mysteries’ such as free will are illusions”). The positive relation described above between literacy and free will appears to “privilege” literate cultures over simpler ones, which smacks of “unfairness”. Beyond that, and more generally still, the substantial existence of free will would imply doubts about our humble status as mere higher primates. If the human were permitted to be more than a sub-category of the mammalian, then humans would be impossible to study through scientific method (including the quasi-science of history) beyond a point, and the scholar would be faced with having to say occasionally, “I don’t know.” As progressive as the professoriate likes to imagine itself, then, this manner of evolution leaves it highly uncomfortable. Under the circumstances, the discussion in which I am about to engage is assured of marginalization in any university setting. One might well call it heretical.
Yet no professor should take issue with the most transparent objective of the science of haeretics: to make readers more aware of the assumptions about human choice (does it exist at all? if so, how far does it extend?) implicit in works of unfamiliar cultures. The thrust of such study, after all, is relativist in that it may dissuade students from thinking Gilgamesh or Achilles (for instance) an insufferable, vainglorious bravo just because he isn’t “one of us”. Students may be brought to understand, instead, that only the godly part of such a hero’s DNA possesses free will—and that, in the gods, choice is radically free, the whimsy of an enfant terrible rather than the dutiful calculation of a good citizen. Then, we may hope, they will no longer associate the hero’s repellent wildness with any human character-type. In texts from oral, tribal cultures, the human is that side of the god/man heroic formula where virtually no free will is to be found. Both Gilgamesh and Achilles (to return to these most anthologized of heroes) are at last condemned to tragic resignation once they realize that mortality is an invincible drag upon their headstrong godly impulses. Let the professor, then, but make this case, and his or her undergraduates may not only detest the ancient superhero a little less after seeing the human victim within his schizoid nature: they may also relinquish some of their gathering sentiments of cultural one-upmanship. Gilgamesh is not the swaggering team-captain in a letter-jacket whom they recall with loathing from high school: he is a failed god, a star without enough mass to ignite—an idle fantasy of absolute power (snatching virgins at will, commanding mountains to fall) that collides hard with the waking world. In short, he is us.
These are fully worthy objectives within the parameters that I have posited, and I say so quite without irony. A small dose of relativism is morally salutary—no saint ever lost his halo for double-checking his bearings. Yet as an objective, the relativist perspective is also rather superficial. More challenging is the application of haeretics in a truly aesthetic manner (for the backfilling of cultural information which I have just described would be feasible for both good and bad stories without its degree of thoroughness having any effect on their literary goodness or badness). If a literary narrative is a sort of Aristotelian time-bomb, and if the tensions within it are the source of its ticking, then the anguish endured by principal characters in making critical choices may be an important source of tension. In cultures which believe implicitly in free will—in true moral virtue and in genuine guilt—we indeed expect such stresses. If the characters of a modern novel do not struggle over their choices in the least, then we must be dealing with a “thriller” where the Titanic is sinking, an earthquake is about to eat Los Angeles, or some other extrinsic menace imports vast calamity; and works of this sort we do not rate as literary classics. Contrast the thriller genre with Joseph Conrad’s oeuvre. Ships still sink and jungle drums still beat—but the real point at issue is the protagonist’s will power. A young idealist called Lord Jim by the natives so insistently takes responsibility for his actions (even though sane men like Marlow know that much in life lies beyond our control) that he finally offers himself up in a kind of sacrifice. Though Conrad implies that Jim’s death is tinged with fanaticism, he also clearly admires the brave assertion of free will in the teeth of so much evidence against it. 
An alter-ego for Jim, perhaps, the sinister Kurtz in Heart of Darkness retains shreds of this admiration, although the fully evolved Kurtz asserts a freedom without responsibility—the Gilgamesh-like god-whimsy which the muzzle of a gun bestows upon him. The reader’s calculation of just how free such characters are and where their real-or-imagined freedom may lead them works intimately with the plot to generate suspense. One would indeed be hard-pressed to find another author in whom the “aesthetic of free will” so dominates the completed impression.
Now, a more traditional work emanating from a culture which held the gods to be fully in control of our destiny would draw little suspense from internal struggles of various characters over consequential decisions. Such characters, more likely, would pray and then follow the omens. Homer, for instance, could not have intended Odysseus to be faced with a complex moral determination when Hermes advises him to sleep with Circe. The contemporary student often fails to understand this and thus cannot give a proper aesthetic estimate of the Odyssey. He or she probably reasons, instead, that Odysseus is far from home when a lovely goddess falls into his arms and that he thereupon responds like the typical sailor, despite his long-suffering wife’s preservation of her fidelity at immense personal risk. The moral incongruity elevated in this view sends a line of fracture through the epic’s aesthetic whole: even after the hero bravely reclaims his wife and kingdom, an odor of hypocrisy clings to his victory. The student in such cases has failed to grasp that a) mortals cannot refuse the advances of goddesses (a point more applicable to Calypso’s case than to Circe’s), and b) the sex act is more an apotropaic compromise in these “affairs” than a roll in the hay (the goddesses often being associated with death itself, and the unions—in the Odyssey and elsewhere—being distinctly, symbolically infertile). Odysseus’s “choice” is to escape being swallowed by the shadowy force embodied in the two goddesses who detain him: it is not to savor the joys of a convention in Las Vegas, where all that happens stays permanently hidden.
The previous example, like most one may adduce to show the risks of injecting too much choice into a traditional story-world, works largely by negative application. That is, it shows how a text may be aesthetically undermined if we impose our expectations of moral freedom upon the work of a culture which has more limited expectations. In fact, my Odyssean example somewhat illustrates the previous point about the need for reading with a certain cultural/historical awareness (and again, I do not argue against the usefulness of history in informing aesthetic impressions). The difference is that here the literary text itself is damaged if we fail to reckon properly its mythic backdrop—the full implications, specifically, of sleeping with a dark witch-goddess. Interpreting Gilgamesh as an “arrogant high school jock” is likewise a gross anachronism; but it has little effect on the inner coherence of the epic, since Gilgamesh’s choices are shrouded in mystery, anyway. That they are a god’s choices rather than a man’s points us outward to ancient Sumeria’s conception of the divine rather than inward to the resolution of specific tensions in the work. In other words, there is no intratextual question on the order of, “Just how much man and how much god is Gilgamesh?” floating about in search of resolution by the plot, which only mirrors judgments of the broader culture. Frankly, treating ingenious translations of ineloquent, centuries-old tablets or scrolls as authentic texts has problems in itself. One can scarcely speak of an aesthetic whole when one has little more than a series of broken telegraphic messages.
In any event, let me offer another example of a traditional work where the imputation of free will would merely vitiate the aesthetic effect: folklore. An Irish folktale about (or titled, one may say with license) “Johnny Doyle” has the eponymous character slaying a series of giants after appearing to promise them quarter if they would reveal vital secrets to him. The literate modern audience must squirm at Johnny’s cynical freedom from scruples—but this sort of ruthlessness typifies folklore and is clearly not perceived by the original audience as tarnishing the hero. The realm of choice in behavior includes one’s ethnos, one’s clan—but not one’s tribal or ancestral adversaries, who may and should be treated as if they were subhuman. To quibble about the folktale’s (this or any other folktale’s) parochial view of freedom would be to destroy the tale’s pleasure—to shred it with complexity and leave its unearthed contradictions (as they appear to the literate) grist for some psychoanalyst’s mill. Who can doubt that no folktale was intended to be so scrutinized?
In my final remark about this tentative field’s elusive definition, however, I must somewhat double back upon myself. Grant that the joy of a folktale lies in not reading it too finely, and that the pleasure of the Odyssey rests partly in a dream-like atmosphere wherein deeds have few far-reaching consequences. (The gist of the epic’s first half, at least, might be framed, “Live another day, see another island.”) Is the moral issue to be shoved aside in such cases just because it would not have presented itself to the original audience? Since the unique nature of narrative aesthetics presses it to borrow tension from non-aesthetic sources (e.g., from moral dilemmas as well as—or rather than—smoking volcanoes), is not part of the aesthetic evaluation in this single instance an assessment of how coherently moral values are satisfied? Granted, again, that earlier works draw very lightly from moral sources of tension (even Aristotle is enough a child of his times to allow that good drama need not use profound characters [Poetics 1450a. 6. 23-25]): do not such works place their alternative scheme of destiny on trial by default? If they portray a world where gods rather than humans make most of the decisions, is the beauty of their portrayal not at least somewhat beholden to how real it appears? Do superhuman or inhuman forces in fact determine most of life’s turns? Perhaps the gods are psychic forces, as Euripides and Vergil seem to represent them: passions warring against more deliberative elements, usually in a struggle that has no clear victor. That approach surely offers many more realistic applications to life than pouring wine into the sea in the hope of fair weather. One can extend the allegorizing vector to include genetic material: perhaps our actions, like our looks, are written as divine will in the double helix. Yet what solace can those of us attracted to such predestination draw from immolating goats and bulls? 
Is the House of Atreus’s penchant for slaying its own an allegorized exhortation to purge the genetic pool?
Could it be, in fact, that most or all of our educated explications de texte cover up the inadequacy of ancient narratives to address abiding, genuine life-issues (as opposed to antediluvian superstitions)? Even the reading which I offered earlier of Gilgamesh—that the hero is a limited mortal creature having to grow out of humoring his fantasies—is a contemporary interpretation. It is a flattering and a powerful one, as well, and it enjoys a high chance of success with undergraduates; but no Sumerian three thousand years ago would have advanced it or even understood it. What we forget about our symbolic exegeses of various myths is that the symbol is the reality in the original setting, and hence not a symbol. Gilgamesh was true man and true god, not a condensation of the abstract struggle we all have with seductive whimsy. Is the tale that succeeds with us, then, ancient Sumeria’s or our own?
The naïf will howl that “realistic application” is in the eye of the beholder, and that to claim that it is lacking in ancient myth reeks of cultural prejudice. Such misguided tolerance damns the mythic world—or any other narrative world—with condescending praise. An adult would not, I presume, list Peter Pan among the three greatest novels of all time, though it is a classic in its proper place. Ancient cultures feared various morphoses of the bogey-man much as children fear the dark, and we likewise do not disparage their efforts to exorcise their special demons any more than we start trembling when the sun sets. Whether through allegory or by some other means, however, we at last demand that the greatest narratives should credibly address the most credible concerns of thoughtful humans. The narrative accepts our challenge when it undertakes mimesis—the imitation of life—for then it promises to paint us as we are. A story would be ruined if a man were to laugh upon receiving news that his son had died, or if a woman were to walk day and night to see how far a drink of water would carry her. Unless such behavior were the stuff of lunacy or else couched in allegorical terms (and even allegories, if well done, should be prima facie plausible), the pretext of the tale would be simply ridiculous. Our common humanity—our fears, ambitions, moods, ideals, and various other motivations—is the medium of the story-teller, and in his aesthetic chore he is limited by the extra-aesthetic quality of that medium. He may no more expect us to appreciate a product created from distorted, absurd motives than a musician could expect to win us over if every note of his score were shifted flat.
Let the exponent of cultural tolerance, then, preach that humans at certain times and in certain places hate their own children or eat their own fingers, if he values his rhetorical pose so highly. He will end up with no other reason for studying literature than as historical artifact (though it will indeed contain fascinating artifacts, if human behavior may be finessed into any shape by environment). Yet historical enlightenment simply does not account for why most of us read literature. Narrative, in particular, has always partaken of the philosophical, and especially of ethical philosophy: people read and have always read stories, that is, so that they might know how to live. Socrates and his successors referred to Homer’s texts liberally as prescriptive ethical treatises (sometimes taking exception to them: cf. Plato’s Hippias Minor). In the good life, after all, there is a kind of beauty. The order which characterizes the good man’s actions—his trusty adherence to certain principles—lends itself to creating the gravity, the lines of tension, necessary in a riveting tale; for bad men may do anything, but a good man may only do thus-and-so.
I am aware that a thick obscurity still lies over these issues; and if I have not inspired the reader with the same awareness, then my expressions have been taken far more reductively than I intended. I certainly do not mean, for instance, that the good story must be a sermon of some sort which plainly endorses a particular belief system. All definitions of artistic phenomena should preserve a place for the indefinite, for such is the nature of the object. That I may do better justice to the complexity of these matters, I should like to focus intensely on just a few texts in the next section. I hope to illustrate how loosely the final aesthetic judgment may be connected to cultural prejudice, such that judges occupying the same moment in place and time may indeed favor vastly different narratives having little or nothing to do with their shared cultural programming. The divergence in these cases can only be ascribed to differing views of human freedom: of the duties incumbent upon people, the motives embedded in their nature, and their ability to act upon all such internal directives.
 Erich Auerbach’s remarkable Mimesis—written, I might stress, without notes during the author’s self-imposed exile from fascist Europe—is the classic Marxist literary monument of this nature.
 Another commonplace in “lit crit” discussions of the past half-century is Mikhail Bakhtin’s “heteroglossia”, as described in his long essay, “Discourse in the Novel”. Mere months ago, I attended a paper-reading where the presenter descanted upon the famous lines: “These distinctive links and interrelationships between utterances and languages, this movement of the theme through different languages and speech types, its dispersion into the rivulets and droplets of social heteroglossia, its dialogization—this is the basic distinguishing feature of the stylistics of the novel” (trans. C. Emerson and M. Holquist in The Dialogic Imagination [Austin: U of Texas P, 1981], 263). The presenter actually apologized to the audience for reading the passage, explaining that he had been advised against doing so since “everyone had heard it before”—as was likely the case. Yet what, after all, do these gilt words declare? That the novel’s characters speak differently from each other and from the narrator (a “fact” which is far truer of nineteenth-century European writing than of the twentieth-century American novel)? Why should the presence of such different registers be considered a major “distinguishing feature” of the novel? Answer: as a gauge of class warfare. In the hands of recent critics, “heteroglossia” appears to be the smoking gun of social hypocrisy: the “voices” of the marginalized peep plangently or shriek in protest at an indifferent bourgeois mainstream. Is the implication, then, that all these little voices should be fused into a kind of classless Esperanto, or rather that everyone must be allowed his or her own patois in an anarchistic Rubble of Babel? Inasmuch as any progressive theory is implied over and above Bakhtin’s perfectly bland observation, it must finish in incoherence.
Note, finally, the “sacred text” status of this Bakhtinian pronouncement which has engraved it into the memory of every Ph.D. in literature (such that it need no longer be read): if literary criticism were truly blazing new trails in high progressive fashion, and if young literary scholars were truly bound to read the latest criticism in order to stay competent in their field, then why would a text written almost a century ago (largely without library resources, like Auerbach’s, as the author endured a long exile from totalitarian forces) remain indispensable? What biologist reads Lamarck today? What ophthalmologist reads Descartes?
 Yet another non-sequitur running deeply through the theorist’s worldview: if such bio-cultural determinants of status as physical strength and possession of arable land or metal armor must be surmounted in order to liberate certain classes of people, then the conquest of nature through technology must be a good thing—but the progressive inevitably identifies him- or herself with a romantic “green” regard for unspoiled nature. Incoherence is particularly on display when literary scholars struggle to praise aboriginal cultures for their earth-loving practices despite the frequent presence in such cultures of slavery, ruthless treatment of female adultery, etc.
 Certainly my personal exposure to Comparative Literature programs several decades ago suggested no interest in or tolerance of the classical notion of imaginative play governed by universal tendencies and structures within human rationality. On the contrary, the aesthetic was just another cultural category—whether to wear a ring in the ear or through the nose, whether to let hair grow short or long, etc. Jean-François Lyotard built his career around the contention that there could indeed be no transcending aesthetic principle. Similarly, a brief online search of academic references to aesthetics indicates to me that the word proliferates currently as shorthand for a) the loving scrutiny of objects usually deemed too insignificant or squalid for scrutiny (viz. Andy Warhol), or b) the insistently non-moral admiration of objects or events whose nature is usually deemed to demand a moral response (e.g., mangled bodies or images of sado-erotic exploitation). The variety of experience clearly need have nothing whatever to do with good taste—should indeed, apparently, contribute to mocking the bourgeois assumptions underlying “taste”. For instance, the journal New Studies in Aesthetics carries issues dedicated with disturbing promiscuity to “The Aesthetics of Decay”, “Toward an Aesthetics of Blindness”, and so forth.
 From John Henry Bernard’s translation of the Critique (republished by Cosimo [New York: 2007]), pp. 19-20.
 Ibid., 42.
 Ibid., 425.
 Lyotard seems to have taken Kant’s discussion of the sublime as a confession that irrationality ultimately dominates the aesthetic experience and that art therefore can have no transcending, universally binding effect on human beings. I cannot claim to be very familiar with this author. I would only note that Kant carefully (and quite eloquently, by his standards) relates the illimitable quality of the sublime experience to ideas beyond the reach of empirical reason—ideas, for the most part, pointing metaphysically to God. This is anything but a renunciation of the attempt to find common humanity in art; and, unless I am misstating Lyotard’s position, I believe he must mistakenly have equated that which is not apprehensible with that which is irrational.
 One of history’s great parodies, for example, is Marivaux’s Iliade Travesti, in which a couple of buffoonish yokels attempt to negotiate daily life by consulting Homer—or, more exactly, Fénelon’s Télémaque. In our time, the New New Novel promoted by Alain Robbe-Grillet is a far more thorough indictment of the notion that narratives can in fact usefully summarize, legislate, or otherwise represent life; yet such works as these, I contend, must remain mere parodies precisely because they refuse to attempt the capture of life’s meaning in any measure. Carried to their logical conclusion—i.e., beyond the point where they remain new and amusing—they would spell the end to all reading of narratives, and hence to the writing of them. This day will never come, however, because the compulsion for human beings to reflect generally upon their specific experience is irrepressible.
 One might try to get away with pronouncing “haeretics” with the accent on the middle syllable to encourage a dissociation from “heretic”; but the middle “e”, alas, is an epsilon and not an eta.
 Note that what I am calling relativism here can be couched in universalist terms: i.e., the thesis, “You are wrongly judging this culture from your own cultural presumptions,” can be worked into the formulation, “If you understood this culture on its own terms, you would discover that it expresses some part of yourself.” True relativism is anarchy: it is the blunt impossibility of one community’s forging compromises and sharing values with another. I find that academic exponents of the “to each his own” dogma are far more often believers in common humanity who have not unraveled their own rhetoric.
 Though some accounts give Odysseus a son—Telegonus—by Calypso, the story is non-Homeric; and while it is true that Gilgamesh does refuse Ishtar’s advances, Gilgamesh is no mere mortal, nor is Ishtar a goddess of the Underworld variety. The differences matter.
 See Seán Ó Conaill’s Book, ed. Séamus Ó Duilearga (Dublin: Comhairle Bhéaloideas Éireann, 1981), 6-23.
Dr. John Harris, founder and current president of The Center for Literate Values, is also Senior Lecturer in English at the University of Texas at Tyler.