Translation Quality Assessment (TQA)

As English became the universal language of science in the twentieth century, most scientific research worldwide, including in the Arab world, came to be written in English.

At the same time, there is a growing demand for communicating scientific knowledge to the public in the form of popular science magazines and TV documentaries as well as encyclopedias and books.

Consequently, there is also an increasing call for translation of these works into language for the ‘everyman’ reader. It is, therefore, essential for translators and translation trainees to be aware of the translation problems that such popularizations may pose and the factors that affect the quality of translations of such writings.

Accuracy and Equivalence

A TT has always been assessed in terms of its relation to the ST, traditionally called the relation of equivalence. The concept of equivalence, however, has so far proved elusive to definition (cf. Bassnett-McGuire 1991; Pym 1992; Baker 1992). Among the most influential works on equivalence in translation is Eugene Nida’s (1964), which distinguishes between two types of equivalence: formal and dynamic. Formal equivalence focuses on the form as well as the content of the message, whereas dynamic equivalence focuses on producing an equivalent effect on target language (TL) readers by tailoring the message to the linguistic specifications of the TL and the target culture. In other words, when the aim is to keep as close as possible to the ST in content and form, the translator would produce a formal equivalence, but when the aim is to make the TT conform to target culture conventions and read like TL original texts, the translator would be producing a dynamic equivalence.

Nida does not ignore the fact that keeping close to both the content and the form of the ST is often not possible, and therefore considers, as a general rule, that content should always take priority over form if an equivalent effect is to be achieved. Obviously, if this rule is applied to poetry, where form is as important as, if not more important than, the content, an equivalent effect cannot be achieved.

The importance of Nida’s work lies in his attempt to systematize translation methods and assessment. His concept of equivalent effect, however, is vague: equivalent effect on potential source or target readers defies scientific measurement, and also, there are language and cultural differences regarding what is considered as the equivalent effect of a ST in the TL (Munday 2001: 42).

Newmark (1981) builds on Nida’s work, but even though he questions whether the effect produced by STs could possibly be reproduced on TT audiences, he does not completely abandon Nida’s concept of equivalent effect. Using Nida’s dynamic and formal equivalences as a basis, he identifies two types of translation as “correct”: communicative and semantic.

The choice between semantic and communicative methods for Newmark seems to depend on the genre, for he assigns serious literature, autobiography and any important political or other statement to semantic translation, where the criterion of assessment is the accurate reproduction of the significance of the ST. As for non-literary and technical writings, communicative translation should be applied, the criterion of evaluation being the accurate communication of the ST message in the TL (Munday 2001: 45). Determining the levels at which the significance of a text and its message are to be found, and measuring accuracy in each case, remains, however, subjective.

The translated text (TT) may be assessed by experts such as professional translators, translation or language teachers and others, including the researcher. The assessment parameters, which may or may not be clearly stated, are in most cases those used in translation courses; this method will therefore be referred to here as the “pedagogical approach”, although it does not differ considerably from the assessment methods for professional accreditation (ATA, 2000). There is no way to prevent the evaluator from assessing the translation by comparing it to an ideal text she could have produced herself, thus projecting her own individual standards or prejudices onto the actual text.

In that way, the evaluator’s experience on the subject warrants her opinion about the quality of a TT. Thus, it does not provide an objective measure of quality in translation, but it has been used to investigate the translating process (e.g., Jensen, 1999; Tirkkonen-Condit, 1986). Some authors have suggested that a comparison between the propositional analysis of STs and TTs should provide an objective measure of quality, namely the proportion of ST propositions that are also present in the TT (Dillinger 1989; Militão 1996; Tommola & Lindholm 1995). Thus, the propositional content figures as a tertius comparationis. Such a comparative analysis is, in my opinion, not the path we should strive for. In order to discuss a concrete example, I will next present the work by Militão (1996). It has never been published and deals with written translation, whereas the other works mentioned above deal with simultaneous interpreting.

Militão (1996) asked professional translators to translate a text containing cultural and spatial or “orientational” metaphors. Cultural metaphors relate concepts to other categories that are culture-bound (She speaks in italics), while “orientational” metaphors occur when concepts are organized in terms of the more basic system of spatial orientation (I’m feeling up today). Her aim was to investigate whether the type of metaphor (cultural vs. “orientational”) influences the cognitive processes involved in translating a text.

Based on cognitive theories, she hypothesized that as “orientational” metaphors are based on semantic components that could be found in different cultures, they may be preserved in translation. She analyzed all metaphors in terms of their propositions and compared them with the analyses of the translated metaphors. As she had thought, more metaphors of the “orientational” type turned out to be preserved in the translations, as compared to cultural metaphors.
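To make the measure concrete, the following is a minimal sketch of the propositional-overlap idea, not taken from any of the studies cited above: it assumes that propositions have already been extracted (here, by hand) as normalized predicate-argument tuples, and the toy data are invented.

```python
# Hypothetical illustration of the propositional-overlap measure
# (cf. Dillinger 1989; Militão 1996): the proportion of ST propositions
# that reappear in the TT. Propositions are assumed to have been
# extracted beforehand as normalized predicate-argument tuples.

def proposition_overlap(st_props: set, tt_props: set) -> float:
    """Share of source-text propositions preserved in the target text."""
    if not st_props:
        raise ValueError("source text yielded no propositions")
    return len(st_props & tt_props) / len(st_props)

# Invented toy data: (predicate, argument1, argument2) tuples.
st = {("cause", "virus", "infection"),
      ("spread", "infection", "population"),
      ("develop", "researchers", "vaccine")}
tt = {("cause", "virus", "infection"),
      ("develop", "researchers", "vaccine")}

print(f"{proposition_overlap(st, tt):.2f}")  # 0.67: two of three preserved
```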

In their attempt to offer a systematic approach to the task of translation, Hervey and Higgins (1992: 22-24) reject the principle of equivalent effect and criticize it as misleading and unhelpful for several reasons. First, measuring the exact effect of a ST is hard and problematic. Second, this principle presumes that a translator is able to know what effect the TT will have on its recipients.

These two problems indicate that any assessment of equivalent effect will not be objective, because translators will have to substitute their own subjective interpretations of what effects a ST has on its recipients and a TT on its intended audience. Third, translation between any two languages is a translation between two different cultures, and, therefore, any effects of STs and their TTs will never be the same. Finally, in the case of STs written at a relatively distant point in the past, even if an objective equivalent effect is attainable, there is the problem of determining the effect of such a ST on its original audience. There is also the question of whether to reproduce the effect of a ST as it was on its original audience or as it is on a modern SL audience. Any attempt to determine such effects will, of course, be merely speculative. In short, the principle of equivalent effect is intrinsically vague and poses too many methodological problems for it to be applied in a systematic study.

First, the more detailed a system is, the more difficult it is to apply and to achieve intersubjective reliability; most systems are, moreover, based on the researcher’s own interpretation of the propositional content. Second, data interpretation depends on the criteria according to which a certain TT should be assessed. As shown, cultural metaphors tend to be more easily leveled out, e.g. through paraphrasing. This is, nevertheless, a natural process, due to the fact that cultural metaphors, as opposed to “orientational” ones, are generally not bound to language-independent semantic structures. So, they “survive” only after some kind of re-creation. The same fact (leveling out a metaphor) could thus be interpreted either as an error or as a useful strategy, depending on the type of text, the audience, etc. A similar problem arises when qualifying the translation according to the reproduced information, as verbatim, paraphrase etc., as done by Dillinger (1989). In this case, although there is promising work on systems that automatically extract informational content from texts (Foltz, 1996; Rieger, 1988), a translation is good not only because it shares ST content. Where there is no empirical study on how translations of various types are produced, such a tertius comparationis is bound to be affected by the researcher’s own notions. In other words, this scientific approach is also in danger of revealing more about the researcher’s opinions than about translation quality.
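For readers unfamiliar with such systems, the sketch below approximates the idea behind latent semantic analysis (cf. Foltz, 1996) with off-the-shelf scikit-learn components. It is my illustration, not the implementation used in any study cited here; the background corpus, the texts and the dimensionality are invented placeholders.

```python
# Rough approximation of LSA-based content comparison (cf. Foltz 1996):
# texts are projected into a low-dimensional semantic space and the
# ST-TT similarity is the cosine between their vectors.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

# Toy background corpus; a real study would use many comparable texts.
corpus = [
    "the virus causes infection in the population",
    "researchers develop a vaccine against the virus",
    "the infection spreads quickly among people",
    "a vaccine protects people from infection",
]
st = "the virus causes an infection that spreads in the population"
tt = "infection caused by the virus spreads through the population"

vec = TfidfVectorizer().fit(corpus + [st, tt])
X = vec.transform(corpus + [st, tt])
lsa = TruncatedSVD(n_components=3, random_state=0).fit(X)  # tiny LSA space
st_vec, tt_vec = lsa.transform(vec.transform([st, tt]))
score = cosine_similarity([st_vec], [tt_vec])[0, 0]
print(f"{score:.2f}")  # values near 1 suggest shared informational content
```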

The Problem of Validity

Having presented and discussed some methodological problems, we can now turn to a more far-reaching question: the question of validity. A measure is valid only when it really measures what it is supposed to measure. This is not an easy question when it comes to translation quality because, as stated right at the beginning, there is no consensus on what translation quality means. In the pedagogical approach, it is up to the evaluator, with her experience, to gauge the quality of a given work. In the scientific approach, a common research strategy is to define “quality” in the first place, and then look into the data. This is why House (2000) begins her section on the quality of translation by stating that translation quality assessment requires a theory of translation.

In my opinion, this is not very convincing. First, for epistemological reasons: a first-order theory based on empirical data always comes before second-order, theoretical formulations (cf. for TS, Königs, 1990). Data about the quality of translations in terms of text characteristics could be of great interest in the investigation of fundamental questions about how translations are produced.

A case in point is whether working memory is as important for translating as it is for creative writing, where it is known to influence production time and text quality (Ransdell & Levy, 1996). If the answer is positive (as it seems to be, cf. Rothe-Neves, 2002), this piece of information is useful for understanding translating under time pressure, an issue that certainly involves more than just cognition in the translation business. It follows, then, that we should be able to keep track of translation quality before theorizing about it, that is, in a theory-independent way.

Secondly, it is not very convincing for methodological reasons, with regard to what was previously said about using an interpretative system that is not backed up by actual translations.

Hervey and Higgins (1992) adopt the more practical principle of inevitable translation loss, which means that every translation involves a certain degree of loss of meaning. Consequently, the translator’s task is not to seek the perfect or ideal translation but to reduce the translation loss.

To achieve this aim, the translator will have to decide “which of the relevant features in the ST it is most important to preserve, and which can most legitimately be sacrificed in preserving them” (Hervey/Higgins 1992: 25). Their concept of translation loss not only includes the inevitable loss of ST textual features, but also translation “gain”, the addition of textual features to the TT that are not present in the ST, such as using TT words that have connotations not present in the ST. The translator’s task thus moves from chasing an elusive ultimate translation by trying to maximize similarities between what are essentially two different texts to the more realistic task of reducing translation loss by minimizing the differences between the ST and the TT.

According to Dickins, Hervey, and Higgins (2002: 21-25), translation loss is not a loss of translation, but of textual effects, and since effects cannot be quantified, loss cannot be either. It can, however, be controlled by continually asking if the loss matters or not, in relation to the purpose of translation.

Unlike Nida’s equivalent effect, this concept deals with specific identifiable textual features and not with the effect of a text as a whole. In addition, as is clear from the above discussion, identifying the genre properties is essential to determining textual relevance. There are also questions regarding the translation: its purpose, its audience, its time and place and its medium, the answers to which constitute the translation brief. This brief can, then, be used to decide the strategy that should be followed in translation.

The information in the brief, along with the genre requirements, can help reduce the subjectivity in determining textual relevance.

As discussed so far, controversies may be raised about whether the scientific approach can fulfill the needs of investigations of translation quality. Probably, those problems derive from the fact that, coming from theoretical linguistics, science is envisaged as consisting of deductive reasoning. In fact, deductive reasoning is quite productive in science, but it helps mostly when there is sufficient empirical knowledge to support it. This is perhaps a good reason for us to return to a pre-scientific status in the area of translation quality, which is represented by the first assessment method presented above.

As discussed in the next section, there are methods to extract subjective information in such a way that it can be statistically reliable. So, we could improve the pedagogical assessment of quality in order to generate research-useful, first-order data. As the momentum in TS seems to call for more empirical work before we begin with generalizations, how can we deal with the issue of validity as part of this pre-scientific move? In order to be consistent, it seems that the same source of information has to provide evidence for both the validity and the reliability questions; that is, we should be able to collect empirically justifiable data to build valid and reliable answers.

As said, this study was carried out from the perspective that traditional methods that do not use an independent system of assessment could be improved by refining their data collection techniques. The choice here is to bypass the researcher’s own subjectivity by letting translations be assessed by others. These referees will be called external evaluators because they are not involved in the research process: they are not aware of the hypotheses to be investigated. This is not a new road; on the contrary, it was proposed quite a long time ago by Nida & Taber (1982, p.170 et seq.) in the form of “practical tests”. Nida & Taber proposed that normal readers, whom the translation addresses, should read the translations and react to them following standard forms (cloze test, alternative choice etc.). Individual prejudices should be naturally overcome through sampling techniques.
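The cloze procedure itself is mechanical enough to sketch: every nth word of the TT is blanked out and readers are asked to restore it, the restoration rate serving as an index of comprehensibility. The following is a minimal illustration of my own, with an arbitrary deletion interval, not Nida & Taber’s actual instrument.

```python
def make_cloze(text: str, every: int = 5):
    """Blank out every nth word; return the test text and the answer key."""
    words = text.split()
    answers = []
    for i in range(every - 1, len(words), every):
        answers.append(words[i])
        words[i] = "_____"
    return " ".join(words), answers

test, key = make_cloze("the infection caused by the virus spreads "
                       "quickly through the whole population", every=4)
print(test)  # the infection caused _____ the virus spreads _____ ...
print(key)   # ['by', 'quickly', 'population']
```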

In my opinion, assessment through external evaluators presents at least two advantages. First, it does not require the researcher to use a tertius comparationis, be it an ideal translation or an analysis system. Secondly, if, contrary to Nida & Taber, the external evaluators are translation professionals (translators, translation teachers etc.) who share similar contextual conditions with the translators who produced the TT to be assessed, assessment data could be taken as a portrait of the quality criteria used at that time and place, provided that the subjective data are treated in such a way as to objectively capture whatever intersubjective parameters emerge.
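One conventional way of treating such data is to check whether independent evaluators converge on shared criteria with an internal-consistency statistic such as Cronbach’s alpha. The sketch below is illustrative only: the ratings are invented, and evaluators are treated as “items” and translations as observations.

```python
import numpy as np

def cronbach_alpha(ratings) -> float:
    """Internal consistency; rows = translations, columns = evaluators."""
    ratings = np.asarray(ratings, dtype=float)
    k = ratings.shape[1]                      # number of evaluators
    item_vars = ratings.var(axis=0, ddof=1)   # variance per evaluator
    total_var = ratings.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Invented scores (1-10) given by three evaluators to five translations.
scores = [[7, 8, 7],
          [4, 5, 4],
          [9, 9, 8],
          [5, 6, 6],
          [8, 7, 8]]
print(f"{cronbach_alpha(scores):.2f}")  # values near 1 = high agreement
```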

The Analysis of Text

According to Nord (1997), it does not matter which text-linguistic model is used in analysis as long as it includes “a pragmatic analysis of the communicative situations involved and that the same model be used for both the source text and the translation brief, thus making the results comparable” (Nord 1997: 62). Munday (2001) summarizes the following intra-textual factors listed by Nord (1991) as one possible model for ST analysis:

• subject matter;
• content: including connotations and cohesion;
• presuppositions: real-world factors of the communicative situation presumed to be known to the participants;
• composition: including microstructure and macrostructure;
• non-verbal elements: illustrations, italics, etc.;
• lexic: including dialect, register and specific terminology;
• sentence structure;
• suprasegmental features: including stress, rhythm and “stylistic punctuation.” (Munday 2001: 83)

Hervey and Higgins’ (1992) model of translation includes a schema of five filters or categories “through which texts can be passed in a systematic attempt to determine their translation-worthy properties” (Hervey/Higgins 1992: 224). These categories are the genre, cultural, formal, semantic, and varietal filters. Analysis on the genre level includes identifying the type of communication (oral or written), medium of communication, text type and ST subject.

In other words, this filter includes Nord’s factors of subject matter and composition. Although analysis of non-verbal elements is not explicitly discussed in Hervey and Higgins’ model, any examination of the main genre properties performed in this filter should take account of such elements. The cultural filter examines all features in the ST that are exclusive to the source culture or source language and which in translation can involve a degree of cultural transposition. This filter covers the element of presuppositions in Nord’s (1997) model.

The semantic filter, which analyzes textual features related to literal and connotative meanings, includes the factor of content in Nord’s model, while the formal filter analyzes features on the inter-textual, discourse, sentential, grammatical, prosodic and phonic/graphic levels of the text, thus covering the factors of composition, sentence structure and suprasegmental features.

Finally, the varietal filter examines textual features related to dialect, sociolect, social register and tonal register that may be present in the ST, and this filter covers Nord’s lexic factor. It is evident that the elements identified by Nord in her suggested model of text-linguistic analysis are all included in Hervey and Higgins’ schema of textual filters. Their schema, however, ensures that the analysis of the ST is performed in a systematic way without neglecting any textual property. Also, they provide more detailed categories than are mentioned in Nord, which proves very helpful to the translator when faced with the complexities of the linguistic and textual features of a text.

Translation Abstract

Translation is normally performed by assignment from a client, who could be called the initiator. This initiator needs the translation for a purpose and ideally (s)he will inform the translator of that purpose along with other details to help the translator produce the required TT.

According to Nord (1997: 30), these pieces of information are called by Vermeer (1989) translation commission, by Kussmaul (1995) translation assignment, by Nord (1991) translating instructions, and by Fraser (1996) translation abstract. Nord (1997) adopts the term translation brief, because it best describes the type and function of the information to which this term refers. The term “implicitly compares the translator with a barrister who has received the basic information and instructions but is then free (as the responsible expert) to carry out those instructions as they see fit” (Nord 1997: 30).

The translation brief helps the translator draw profiles of the ST and the required TT as well as decide from the very beginning what type of translation is needed. It includes (implicitly or explicitly) the following information, schematized for illustration in the sketch after this list:

• the (intended) text function(s),
• the target-text addressee(s),
• the (prospective) time and place of text reception,
• the medium over which the text will be transmitted, and
• the motive for the production or reception of the text. (Nord 1997: 60)
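Because the brief reduces to a small, fixed set of parameters, it can be pictured as a structured record. The sketch below is purely illustrative: the field names are mine, mirroring Nord’s five points, and the sample values are invented.

```python
from dataclasses import dataclass

@dataclass
class TranslationBrief:
    """Nord's (1997: 60) five points as a record (field names are mine)."""
    intended_functions: list    # e.g. ["informative", "appellative"]
    addressees: str             # target-text audience
    time_place: str             # prospective time and place of reception
    medium: str                 # channel over which the text is transmitted
    motive: str                 # why the text is produced or received

brief = TranslationBrief(
    intended_functions=["informative"],
    addressees="lay readers of a popular science magazine",
    time_place="print issue, next quarter, Arab world",
    medium="print magazine",
    motive="popularizing recent medical research",
)
```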

In other words, the translation abstract is not intended to tell the translator what translation strategy or type to choose, but to help him/her make these decisions. When experienced translators infer the purpose of a text from the translation situation, such as translating a technical ST into a technical TT, the information inferred acts as a translation abstract and is called by Nord (1997: 31) “conventional assignment.”

The Collection of Translation Problems

In Hervey and Higgins’ (1992) method, translation follows a top-down approach, as the translator is required to ask several questions that determine the genre aims and properties, the TT audience, the intended function(s) of the TT and all the information needed to form the strategic decisions before embarking on translation.

These decisions are related to determining the textual relevance of ST textual features which are identified in the ST analysis. Depending on the translation brief and the genre properties, the translator has to decide which features are of high textual relevance and must be retained in the TT.

In other words, the strategic decisions determine which ST features will be reproduced in the TT, and whether the methods of translation will be SL-biased or TL-biased. Consideration of the genre requirements and the information in the brief also helps the translator in determining the methods for dealing with the problems of reproducing the ST textual features, including omission, addition, compensation, paraphrasing, explication, and so on.

Equivalence as Criterion

The area of translation quality assessment criteria is academically one “where a more expert writer (a marker of a translation examination or a reviser of a professional translation) addresses a less expert reader (usually a candidate for an examination or a junior professional translator)” (Munday, 2001:30). However, what has long constituted the core and recurrent concern of all debates in translation studies is what should be held as the criterion for translation quality assessment.

Ever since the ancient thematic controversy over “word-for-word” (literal) and “sense-for-sense” (free) translation (ibid.:18-20), the history of translation theory has seen the theme as “emerging again and again with different degrees of emphasis in accordance with differing concepts of language and communication” (Bassnett, 1991:42). Notwithstanding the fact that there is no denying that the issue “what is a good translation?” should be “one of the most important questions to be asked in connection with translation” (House, 2001:127), “[i]t is notoriously difficult to say why, or even whether, something is a good translation” (Halliday, 2001:14).

Throughout translation studies, theorists have attempted to answer this question “on the basis of a theory of translation and translation criticism” from various perspectives (House, 2001:127), and have proposed, apart from the aforementioned opposing binary pair, formal and dynamic equivalence (Nida, 1964), textual equivalence and formal correspondence (Catford, 1965), etc.

These dichotomies, despite their different perspectives, seem to converge on a consensus in favour of “two basic orientations” (Nida, 1964:159) or types of translation where “the central organizing concept is presumably that of ‘equivalence’” (Halliday, 2001:15). In English-language translation scholarship, the concept of (translational) equivalence is “central” but “controversial” (Kenny, 1998:77). According to Koller (1995:197), it “merely means a special relationship—which can be designated as the translation relationship—is apparent between two texts, a source (primary) one and a resultant one.”

It is Jakobson (1959/2000) who first dealt with “the thorny problem of equivalence” (Munday, 2001:36) in translation between the ST and the TT. Following the relation set out by Saussure between the signifier (the spoken and written signal) and the signified (the concept signified), Jakobson (1959/2000) perceived “equivalence in difference” as “the cardinal problem of language and the pivotal concern of linguistics” (p.114), which has become a “now-famous… definition” from a linguistic and semiotic perspective (Munday, 2001:37). For him, for the message to be equivalent in the ST and TT, the code-units will be different since they belong to two different sign systems (languages) which partition reality differently (Jakobson, 1959/2000:114). Specifically, he succinctly pointed out that there is no complete equivalence in the intralingual translation of a word by means of a synonym, just as “on the level of interlingual translation, there is ordinarily no full equivalence between code-units” (ibid.). This is so because “languages differ essentially in what they must convey and not in what they may convey” (p.116).

Ever since Jakobson’s seminal approach to the concept of equivalence, the question has become a constant theme of translation studies, especially in the 1960s (Munday, 2001:37), and approaches to it “differ radically” (Kenny, 1998: 77): Some theorists define translation in terms of equivalence relations (Catford, 1965; Nida and Taber, 1969; Toury, 1980; Pym, 1992, 1995; Koller, 1995) while others reject the theoretical notion of equivalence, claiming it is either irrelevant (Snell-Hornby, 1988) or damaging (Gentzler, 1993) to translation studies.

Yet other theorists steer a middle course: Baker [(1992:5-6)] uses the notion of equivalence “for the sake of convenience—because most translators are used to it rather than it has any theoretical status.” (Kenny, 1998:77)

Understandably, although the concept has been blatantly labelled by Nord as “a static, result-oriented concept describing a relationship of ‘equal communicative value’ between two texts or, on lower rank, between words, phrases, sentences, syntactic structures and so on (In this context, ‘value’ refers to meaning, stylistic connotations or communicative effect)” (Nord, 1997:36), it is still “variously regarded as a necessary condition for translation, an obstacle to progress in translation studies, or a useful category for describing translations” (Kenny, 1998:77).

This explains why the ad hoc criterion, together with the techniques for achieving it, “continues to be used in the everyday language of translation” (Fawcett, 1997:65), even in the applications of register analysis for translation quality assessment, as will be presented shortly.

The Theory of Register

In the Hallidayan (also called Australian) functional theory of language (Hyon, 1996), “analysts are not just interested in what language is, but why language is; not just what language means, but how language means” (Leckie-Tarry, 1993:26). Halliday stresses the need for a look into the context in which a text is produced while analyzing and/or interpreting a text. He points out that the really pressing question here is “which kinds of situational factor determined which kinds of selection in the linguistic system?” (Halliday, 1978:32; original emphasis).

Context here relates to the context of situation and context of culture, both of which “get ‘into’ text by influencing the words and structures that text-producers use” (Eggins and Martin, 1997:232). While the former is concerned with the register variables of field, tenor, and mode, the latter is described in terms of genre.

Firth and Halliday

The term “register” first came into general currency in the 1960s (Leckie-Tarry, 1993:28). Following Reid’s initial use of it in 1956, and Ure’s development of it in the 1960s (ibid.), Halliday et al. (1964:77) describe it as “a variety according to use, in the sense that each speaker has a range of varieties and chooses between them at different times.” This use-related framework for the description of language variation (as contrasted with the user-related varieties called dialects) (Hatim and Mason, 1990:39) aims to “uncover the general principles which govern [the variation in situation types], so that we can begin to understand what situational factors determine what linguistic features” (Halliday, 1978:32).

De Beaugrande (1993:7) shows his sympathy for the concept of register when he laments, “Throughout much of linguistic theory and method, the concept of ‘register’ has led a rather shadowy existence.” The term did not make an appearance in such foundational works as those of Saussure, Sapir and Bloomfield. This absence is explained by the fact that the term “is hard to define” as a(n abstract) language unit that might be “comparable, say, to the ‘system’ of ‘phonemes’ of a language, or to its ‘system’ of noun declensions or verb conjugations, and so on” (ibid.).

Register-Based Equivalences

Following Hallidayan linguistics, especially the Australian tradition of genre and register theories (see Ghadessy, 1993; Hyon, 1996), theorists have concentrated on offering ways to tackle translation equivalence from functional perspectives. Among these, Newmark, Marco, House, the team of Hatim and Mason, and Baker deserve mention here.

Newmark is fascinated with Halliday’s (1994) seminal work An Introduction to Functional Grammar, especially with the chapter on metaphorical modes of expression (i.e. “Beyond the clause: metaphorical modes of expression”). Here, Halliday supplies good examples illustrating how choices are made when representing metaphors. Newmark (1991) recommends this chapter highly, claiming that it “could form a useful part of any translator’s training course where English is the source or target language” (p.68).

Next comes Marco (2001) who contributes to register analysis in the field of translation quality evaluation by specifically justifying the use of register analysis in literary translation. He points out that such a tool “provides the necessary link between a communicative act and the context of situation in which it occurs” (p.1). For him, register analysis is “the most comprehensive framework proposed for the characterisation of context,” and has the advantage of “provid[ing] a very limited number of variables on the basis of which any given context may be defined” (ibid.).

Like Marco, Hatim and Mason (1990, 1997) also employ register analysis as part of their overall account of context in translation. Despite their claim that there are other contextual factors, i.e. pragmatic and semiotic ones, which transcend the framework of register, they continue to assume that identifying the register membership of a text is an essential part of discourse processing; it involves the reader in a reconstruction of context through an analysis of what has taken place (field), who has participated (tenor), and what medium has been selected for relaying the message (mode).

Together, the three variables set up a communicative transaction in the sense that they provide the basic conditions for communication to take place. (Hatim and Mason, 1990:55; original emphasis)

Also noteworthy in the application of register analysis for practical translation studies are House (1981, 1997) and Baker (1992), who not only adopt Halliday’s model of register analysis but also develop substantial criteria whereby both the ST and TT can be systematically compared. House (1981) rejects the “more target-audience oriented notion of translation appropriateness” as “far too general and elusive” and “fundamentally misguided” (p.1-2). Instead, she advocates a semantic and pragmatic approach.

Central to her discussion is the concept of “overt” and “covert” translations. In an overt translation like that of a political speech, House asserts, the TT audience is not directly addressed and there is therefore no need at all to attempt to recreate a “second original” since an overt translation “must overtly be a translation” (ibid.:189). By covert translation, on the other hand, she means the production of a text, for instance, a science report, which is functionally equivalent to the ST, and which “is not specifically addressed to a TC (target culture) audience” (ibid.: 194). Significantly, House claims that ST and TT should match one another in function, with function being characterised in terms of the situational dimensions of the ST (ibid.:49).

Based upon the Hallidayan model of register analysis, she proposes what she calls “the basic requirement for equivalence of ST and TT,” and asserts that “a TT, in order to be equivalent to its ST, should have a function—consisting of an ideational and an interpersonal functional component—which is equivalent to the ST’s function” (House, 1981:Abstract). To measure the degree to which the TT’s ideational and interpersonal functions are equivalent to those of its ST, House develops a model (see Figure 1 below) as the scheme for systematic comparison of the textual “profile” of the ST and TT (1997:43) in terms of both functions in question. This schema, though “draw[ing] on various and sometimes complex taxonomies” (Munday, 2001:92), can be reduced to a register analysis of both ST and TT according to their realisation through lexical, syntactic and “textual” means.

By the last term, House (1997:44-45) refers to the following (pictured in the sketch after this list):
(1) theme-dynamics (i.e. thematic structure and cohesion),
(2) clausal linkage (i.e. additive, adversative, etc.),
(3) iconic linkage (i.e. parallelism of structures).
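House’s comparison can thus be pictured as two parallel register “profiles” checked dimension by dimension for mismatches. The toy sketch below is my construction, not House’s actual instrument; the dimensions and values are invented for illustration.

```python
# Toy rendering of House's profile comparison: each text receives a
# register "profile", and quality problems surface as mismatched
# dimensions. (Illustrative only; not House's actual categories.)

def mismatches(st_profile: dict, tt_profile: dict) -> list:
    """Dimensions on which the TT profile diverges from the ST profile."""
    return [dim for dim, value in st_profile.items()
            if tt_profile.get(dim) != value]

st_profile = {
    "field": "popular medicine",
    "tenor: formality": "informal, reader-addressing",
    "mode: medium": "written to be read",
    "textual: theme-dynamics": "given-new progression",
}
tt_profile = {
    "field": "popular medicine",
    "tenor: formality": "formal (Standard Arabic convention)",
    "mode: medium": "written to be read",
    "textual: theme-dynamics": "given-new progression",
}
print(mismatches(st_profile, tt_profile))  # ['tenor: formality']
```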

Baker, on the other hand, albeit using the term equivalence “for the sake of convenience” (1992:5), extends the concept to cover similarities both in ST and TT information flow, and in the cohesive roles ST and TT devices play in their respective texts, both of which she collectively calls “textual equivalence.” She also examines equivalence at a series of levels: at word, above-word, grammatical, and pragmatic levels (Baker, 1992).

As far as House’s model is concerned, although it seems to be much more flexible than Catford’s, it still raises the doubt as to whether the model is able to recover authorial intention and ST function from register analysis (Gutt, 1991:46-49). Even if this is possible, it is further argued, the basis of House’s model is to discover “mismatches” between ST and TT (ibid.). Regarding Baker’s framework, she obviously assigns new adjectives to the notion of equivalence (grammatical, pragmatic, textual, etc.), thus adding to the plethora of recent works in this field.

Importantly, by putting together the linguistic and the communicative approaches, she offers a fresh and more detailed list of conditions upon which the concept of equivalence can be defined. Unfortunately, however, she fails to provide a workable checklist against which degrees of equivalence can be established at the various ranks she proposes. In respect of Hatim and Mason’s studies, “their focus remains linguistics-centred, both in its terminology and in the phenomena investigated (lexical choice, cohesion, transitivity, style shifting, translator mediation, etc.)” (Munday, 2001:102).

The Semantic Filter

Textual analysis on the semantic level reveals that the genre of medical feature articles exhibits significant instances of denotative and connotative meaning. The most significant denotative meaning is technical meaning. Genre analysis shows that the heavy use of technical terms is discouraged in popular science, but such terms cannot be avoided altogether.

Moreover, publications differ in their use of technical terms: some, like the Scientific American, allow a heavier dosage of them than the National Geographic, for example. It is worth noting here that these differences indicate a subtle connotative meaning that technical terms express in popular science: the more they are used, the more exclusive the publication is, which in turn reflects on its status in comparison with other publications.

In addition, the stylistic choices regarding technical terms, such as the use of borrowed or indigenous forms of a term, academic or popular forms, or a full form of a term or its abbreviation, all have connotative meanings reflecting on the publication, the author and the intended reader. Such considerations should also be weighed by the translator when dealing with the problem of technical terms.

From the translator’s point of view, technical terms pose three main problems: lexical, conceptual and stylistic. According to Dickins, Hervey, and Higgins (2002: 184-185), lexical problems are of three types:

(1) technical terms that are unfamiliar to the translator because they are not usually used in everyday language, and therefore require specialist knowledge to understand and render them correctly in the TL;
(2) everyday familiar terms that are used in a specialized way; and
(3) familiar terms which, while their use is specialized, also make sense in a way that is not obviously wrong in the context, thus posing a risk of not being recognized by the translator.

Types 2 and 3 are also called sub-technical terms by Trimble (1985: 129).

The Formal Filter

The formal filter includes the following levels identified by Dickins, Hervey and Higgins (2002: 79): the phonic/graphic, prosodic, grammatical, sentential, discourse and intertextual levels. The formal filter therefore requires detailed analysis of all sentences and paragraphs in the text.

The Varietal Filter

The most significant variable to be discussed in relation to the varietal filter in popular science feature articles (PSFAs) is register. Variations in register in relation to technical terms and genre have already been discussed above.

As for the social register, popular science articles are typically characterized by a neutral style, which is successfully reproduced in the TTs. The production of tonal register in the TTs, however, involves some inevitable translation loss as well as some unnecessary loss. PSFAs in general tend to be less formal than academic papers in scientific journals, and complex or unfamiliar technical terms or concepts are often explained. Due to an intrinsic formality in Standard Arabic, a translation loss on this level is inevitable.

References

Abu-Ssaydeh, Abdul-Fattah (2004): “Translation of English Idioms into Arabic.” Babel 50 [2]: 114-131.

Baker, Mona (1992): In Other Words: A Coursebook on Translation. London: Routledge.

Bassnett-McGuire, Susan (1991): Translation Studies. Revised ed. London: Routledge.

Bhatia, Vijay Kumar (1993): Analysing Genre: Language Use in Professional Settings. London: Longman.

Dickins, James; Hervey, Sándor; Higgins, Ian (2002): Thinking Arabic Translation: A Course in Translation Method: Arabic to English. London: Routledge.

Firth, John R. (1957): Papers in Linguistics 1934-1951. London: Oxford University Press.

Firth, John R. (1968): Selected Papers of J.R. Firth 1952-1959. London: Longman.

Foltz, Peter W. (1996): “Latent semantic analysis for text-based research.” Behavior Research Methods, Instruments and Computers 28 [2]: 197-202.

Fraser, Janet (1996): “The Translator Investigated.” The Translator 2 [1]: 65-79.

Full History (2003): http://www.sciam.com/page.cfm?section=history (19 December 2003).

Gentzler, Edwin (1993): Contemporary Translation Theories. London: Routledge.

Ghadessy, Mohsen, ed. (1993): Register Analysis: Theory and Practice. London: Pinter Publishers.

Gregory, Michael (2001): “What can linguistics learn from translation?” In Erich Steiner and Colin Yallop, 2001a, 19-40.

Gutt, Ernst-August (1991): Translation and Relevance: Cognition and Context. Oxford: Blackwell.

Halliday, M.A.K. (1964): “Comparison and translation.” In M.A.K. Halliday, A. McIntosh and P. Strevens, The Linguistic Sciences and Language Teaching. London: Longman.

House, Juliane (1981): A Model for Translation Quality Assessment. 2nd ed. (1st ed. 1977). Tübingen: Narr.

House, Juliane (2000): “Quality of translation.” In Mona Baker (ed.), Routledge Encyclopedia of Translation Studies. London: Routledge, 197-200.

Hymes, Dell (1967): “Why linguistics needs the sociologist.” Social Research 34 [2]: 634-647.

Jakobsen, Arnt L.; Schou, Lasse (1999): “Translog documentation, version 1.0.” In Gyde Hansen (ed.), Probing the Process of Translation: Methods and Results. Copenhagen: Samfundslitteratur (Appendix).

Jensen, Astrid (1999): “Time pressure in translation.” In Gyde Hansen (ed.), Probing the Process of Translation: Methods and Results. Copenhagen: Samfundslitteratur, 103-119.

Königs, Frank G. (1990): “Wie theoretisch muß die Übersetzungswissenschaft sein? Gedanken zum Theorie-Praxis-Problem.” Taller de Letras 18: 103-120.

Li, Defeng (2000): “Tailoring translation programs to social needs: a survey of professional translators.” Target 12 [1]: 127-149.

Militão, J. A. (1996): A significação metafórica e o processo de