Sunday, October 14, 2012

Expansion on the Recent Discoveries Concerning Nitric Oxide
as presented by Dr. Jack R. Lancaster
Nitric oxide, NO in chemical notation, was until recently not considered to be of any benefit to the life processes of animals, much less human beings. However, studies have shown that this simple compound has an abundance of uses in the body, ranging from the nervous system to the reproductive system. Its many uses are still being explored, and it is hoped that it can play an active role in treatments for certain types of cancers and tumors that form in the brain and other parts of the body.
Nitric oxide is not to be confused with nitrous oxide (N2O), the latter of which is commonly known as laughing gas; nitric oxide carries one nitrogen atom per molecule rather than two. NO is only slightly soluble in water, and it is a colorless gas. When NO is exposed to air, it combines with oxygen, yielding nitrogen dioxide (2NO + O2 -> 2NO2), a brown gas which is soluble in water. These are just a few of the chemical properties of nitric oxide.
With a total lifetime of only six to ten seconds, it is not surprising that nitric oxide was discovered in the body only recently. The compound is quickly converted into nitrates and nitrites by oxygen and water. Yet even with its short life, it has found many functions within the body. Nitric oxide enables white blood cells to kill tumor cells and bacteria, and it serves as the signal that makes blood vessels dilate. It also serves as a messenger for neurons, like a neurotransmitter. The compound is also responsible for penile erections. Further experiments may lead to its use in memory research and in the treatment of certain neurodegenerative disorders.
One of the most exciting discoveries about nitric oxide involves its function in the brain. It was first discovered that nitric oxide played a role in the nervous system in 1982. In small amounts it proves useful in the opening of calcium ion channels (with glutamate, an excitatory neurotransmitter), sending a strong excitatory impulse. In larger amounts, however, its effects are quite harmful: the channels are forced to fire more rapidly, which can kill the cells. This excitotoxicity causes much of the damage done by strokes.
To locate nitric oxide in the brain, scientists used a purification method on a sample of brain tissue. One scientist discovered that the synthesis of nitric oxide required the presence of calcium, which often acts by binding to a ubiquitous cofactor called calmodulin. When a small amount of calmodulin was added to the enzyme preparations, there was an immediate enhancement of enzyme activity. Recognition of the association between nitric oxide, calcium, and calmodulin led to further purification of the enzyme. When glutamate moves calcium into cells, the calcium ions bind to calmodulin and activate nitric oxide synthase, all within a few thousandths of a second. Once the enzyme is purified, antibodies can be made against it, and nitric oxide synthase can be traced through the rest of the brain and other parts of the body.
Nitric oxide synthase is found only in small populations of neurons, mostly in the hypothalamus. The hypothalamus controls hormone secretion, including the release of the hormones vasopressin and oxytocin. In the adrenal gland, nitric oxide synthase is highly concentrated in a web of neurons that stimulate adrenal cells to release adrenaline. It is also found in the intestine, the cerebral cortex, and the endothelial layer of blood vessels, though to a lesser degree.
Although this experimentation located nitric oxide, it wasn't until later that its function was studied. Its tie to other closely related neurons shed some light on this. In Huntington's disease, up to ninety-five percent of neurons in an area called the caudate nucleus degenerate, but no diaphorase neurons are lost. In strokes, and in some brain regions involved in Alzheimer's disease, diaphorase neurons are similarly resistant. Neurotoxic insults in culture can kill ninety percent of neurons, whereas diaphorase neurons remain completely unharmed.
Scientists studied this perplexing issue. Discerning the overlap between diaphorase neurons and cerebral neurons containing nitric oxide synthase was a good start toward their goal. It was clear that something about nitric oxide synthesis makes neurons resist neurotoxic damage. Yet NO was the result of glutamate activity, which also led to neurotoxicity. The question raised here is, how could it go both ways?
One supported theory is that in the presence of high levels of glutamate, nitric oxide-producing neurons behave like macrophages, releasing lethal amounts of nitric oxide. It follows that inhibitors of nitric oxide synthase should prevent the neurotoxicity. The neurotoxicity of cerebral cortical neurons was studied to test this theory. NMDA was added to cultures of rat brain cells. One day after being exposed to the NMDA for only five minutes, up to ninety percent of the neurons were dead. This models the neurotoxicity that occurs in vascular strokes.
Through these experiments it was found that nitroarginine, a very powerful and selective inhibitor of nitric oxide synthase, completely prevents the neurotoxicity caused by NMDA. Removing the arginine from the mixture likewise protects the cells. Also, hemoglobin, which binds and inactivates nitric oxide, acts as an inhibitor of the harmful effects of neurotoxicity.
The findings of these experiments led to further tests exposing lab rats directly to nitric oxide synthase inhibitors. Because NMDA antagonists can block the damage caused by the glutamate released during strokes, it was asked whether nitric oxide modulates the destruction caused by a stroke. In an experiment performed by Bernard Scatton in Paris, lab rats were injected with small doses of nitroarginine immediately after a stroke was induced. The nitroarginine reduced stroke damage by seventy-three percent. This striking finding suggests there is real hope in the search for treatments for vascular strokes.
Nitric oxide may also be involved in memory and learning. Memory involves long-term increases or decreases in transmission across certain synapses after the repetitive stimulation of neurons, and nitric oxide synthase may play a role in these persistent changes in synaptic transmission. The effects of nitric oxide synthase inhibitors were studied in the hippocampus, the area of the brain central to memory. Because of nitric oxide's many influences, however, further study is needed to determine exactly what role it plays in memory.
Scientists have high hopes for further investigations of nitric oxide. More experiments lead to greater knowledge, and this knowledge is receiving a warm reception in modern medicine. The study of nitric oxide is hoped to lead to cures and better treatments for cancers, tumors, strokes, memory loss, other brain diseases, sensory deprivation, intestinal disorders, and various other biological conditions affected by neurotransmission. The breakthroughs that have surfaced within the past six years of nitric oxide research are already remarkable, and its further study is excitedly under way.

The Evolution of Jet Engines

The jet engine is a complex propulsion device which draws in air by means of an intake, compresses it, and heats it in a combustion chamber; the expanding exhaust turns a turbine and is expelled to produce thrust, a force sufficient to propel the aircraft in the opposite direction (Morgan 67). When the jet engine was first conceived back in the 1920's, the world never thought it would become a reality, but by 1941 the first successful British jet flight had been flown. Since then the types of engines have changed, but the basic principles have remained the same.
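
The thrust described above corresponds to the standard momentum relation: thrust equals the mass flow rate of air times the change in its velocity. The sketch below is a minimal Python illustration with made-up numbers; it is not drawn from the essay or its sources.

```python
# Minimal sketch of the momentum relation for jet thrust:
# thrust = mass flow rate x (exhaust velocity - flight velocity).
# All numbers are illustrative assumptions.

def jet_thrust(mass_flow_kg_s: float, v_exhaust_m_s: float, v_flight_m_s: float) -> float:
    """Net thrust in newtons from the change in momentum of the air stream."""
    return mass_flow_kg_s * (v_exhaust_m_s - v_flight_m_s)

# Example: 100 kg/s of air entering at 250 m/s and leaving at 600 m/s.
print(f"Thrust: {jet_thrust(100.0, 600.0, 250.0):.0f} N")  # Thrust: 35000 N
```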



In 1921 thoughts of a jet engine were based upon adaptations of piston engines and were usually very heavy and complicated. These ideas were refined in the 1930's, when turbine engine design led to the patent of the turbojet engine by Sir Frank Whittle of Great Britain. It was Whittle's design that led Great Britain into the jet age with its first successful flight. At the same time, the Germans were designing their own jet engines and aircraft, which would be one of the factors that kept Germany alive in World War II; their prototype turbojet aircraft, the Heinkel He 178, had already made the world's first jet-powered flight in 1939. By the end of World War II, jet aircraft had entered a few operational squadrons in the German and British air forces, and these jets played a part in the closing stages of the war (Smith 23-27).



A later development in the jet industry was the breaking of the sound barrier and the establishment of normal operations up to and beyond twice the speed of sound; air force bombers and transports also became able to reach and cruise at supersonic speeds (Silverstein 56-70). In the late 1950's civil transcontinental jet services started with the Comet 4 and the Boeing 707. In the mid 1960's all the major jet manufacturers revised their engines, using new materials such as aircraft aluminium, which made them lighter, and making turbine changes that compressed the air to a much higher pressure so the engine could produce much more thrust. The first supersonic airliner, the four-engined turbojet Concorde, which flies at over twice the speed of sound, was brought into regular service in 1976 (Smith 27-30). The one company that dominates the private jet industry is Bombardier, which makes the Learjet turbofans; they have an approximate cruising range of 1880 nautical miles (Jennings 103).



In the future, turbojet engines will continue to develop thanks to technological advances such as graphite composite wings, thermoplastic chassis, and Kevlar skins, which have already changed the weight of modern planes and gliders. With these and other developments, jet engines will be honed to produce greater thrust without increases in weight or size, which will involve small refinements rather than major changes to the existing engine and engine compartment. In the near future there will be a substantial reduction in the noise emitted by the jet engine, due to a change in materials and a reduction of vibration in the housing. The jet industry now has over one thousand jets operational at any one time, which poses the threat of malfunction and crashes; with new computer analysis of problems and the new materials found in the internals of the engines, there is less risk of malfunction than in the past.



Many factors have led to the takeover by the jet of the traditional propeller-driven plane. Some of the basic reasons are the speed, fuel economy, and endurance of the jet engine over piston-driven engines. Together with these refinements and a rapidly changing jet industry, future transportation will become faster and safer for the flier.


Chemistry Experiment.
Dr. Watson.

Evaluating An Enthalpy Change That Cannot
Be Measured Directly.

Introduction.

We were told that sodium hydrogencarbonate decomposes on heating to give sodium carbonate, water and carbon dioxide, as shown in the equation below:-

2NaHCO3(s) ----> Na2CO3(s) + H2O(l) + CO2(g)    ΔH1

This enthalpy change, ΔH1, is what we had to calculate as part of the experiment. It cannot be measured directly, but it can be found using the enthalpy changes of two other reactions: that of sodium hydrogencarbonate with hydrochloric acid, and that of sodium carbonate with hydrochloric acid.

We were given a list of instructions for carrying out the experiment; these are given later.

List of Apparatus Used.

1 x 500 ml Beaker.
1 x Thermometer (-10 to 50 °C).
1 x Polystyrene Cup.
1 x Weighing Balance.
1 x Weighing Bottle.
10 grams of Sodium Hydrogencarbonate.
10 grams of Sodium Carbonate.
A bottle of 2 molar HCl.

Diagram.

[Diagram of the apparatus omitted.]

Method.

Three grams of sodium hydrogencarbonate was weighed out accurately using a weighing bottle and a balance. Then 30 cm³ of 2 molar HCl was measured using a measuring cylinder. The acid was placed in the polystyrene cup and its temperature was taken and recorded using the thermometer. The pre-weighed sodium hydrogencarbonate was then added to the solution, and the final temperature was recorded.

The contents of the cup were then emptied out and the cup was washed with water and thoroughly dried. This was done three times for the sodium hydrogencarbonate so that I could identify and remove any anomalies in the results.

The experiment was then repeated in exactly the same manner, except that sodium carbonate was used instead of sodium hydrogencarbonate.

The results were then tabulated; the tables are shown below.


Results Tables.

Results Table for Sodium Hydrogencarbonate.

[Table of temperature readings omitted.]

Results Table for Sodium Carbonate.

[Table of temperature readings omitted.]


Calculations.

From these results I had to calculate ΔH2 and ΔH3. ΔH2 refers to the enthalpy change when sodium hydrogencarbonate reacts with hydrochloric acid, and ΔH3 is the enthalpy change when sodium carbonate reacts with the acid.

Firstly, however, it is necessary to show the equations for the two reactions:-

2NaHCO3(s) + 2HCl(aq) ----> 2NaCl(aq) + 2H2O(l) + 2CO2(g)    ΔH2

Na2CO3(s) + 2HCl(aq) ----> 2NaCl(aq) + H2O(l) + CO2(g)    ΔH3

The enthalpy changes of the two reactions can be worked out using the formula shown below:-

Energy exchanged between reactants and surroundings = specific heat capacity of the solution x mass of the solution x temperature change.

Therefore, fitting the values for ΔH2 into the formula:-

Energy exchanged between reactants and surroundings = 4.18 x (84 x 2) x (-11.1)

This gives the enthalpy change for ΔH2 as -7794.9 joules per mole.

The same formula is used for ΔH3:-

Energy exchanged between reactants and surroundings = 4.18 x 106 x 21.8

This gives the enthalpy change for ΔH3 as 9659.1 joules per mole.

From these two results we are able to work out what ΔH1 is likely to be, even though we have not measured it directly. This is done using the formula:-

ΔH1 = ΔH3 + ΔH2
ΔH1 = 9659.1 + (-7794.9)
ΔH1 = 1864.2 joules per mole.
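
As a check on the arithmetic, the following minimal Python sketch reproduces the calculation exactly as set out above (energy = specific heat x mass x temperature change, then ΔH1 = ΔH3 + ΔH2). The masses and sign conventions are those used in this report.

```python
# A sketch reproducing the report's arithmetic.
# q = specific heat capacity x mass x temperature change, then dH1 = dH3 + dH2.
# Masses (84 x 2 and 106 g) and sign conventions follow the text above.

SPECIFIC_HEAT = 4.18  # J g^-1 K^-1, assumed for the solution

def energy_exchanged(mass_g: float, delta_t_k: float) -> float:
    """Energy exchanged between reactants and surroundings, in joules."""
    return SPECIFIC_HEAT * mass_g * delta_t_k

dH2 = energy_exchanged(84 * 2, -11.1)  # sodium hydrogencarbonate + HCl
dH3 = energy_exchanged(106, 21.8)      # sodium carbonate + HCl
dH1 = dH3 + dH2                        # combination used in the report

print(f"dH2 = {dH2:.1f} J, dH3 = {dH3:.1f} J, dH1 = {dH1:.1f} J")
# dH2 = -7794.9 J, dH3 = 9659.1 J, dH1 = 1864.3 J
# (the report's 1864.2 comes from rounding dH2 and dH3 before adding)
```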

Conclusions.

The result obtained will not be very accurate, due to the means by which the experiment was done. The equipment used was not the most efficient for measuring enthalpy changes; however, it does give a rough estimate to work from. Among the errors of the equipment would have been heat lost through conduction from the reaction vessel. Heat may also have been lost through the open top of the container: even though there was a lid, it was not very secure, and some heat will have escaped through here.

In summary, the experiment was difficult to undertake: the enthalpy change ΔH1 is hard to determine directly, because sodium hydrogencarbonate decomposes thermally when heated in air, causing great problems in measuring its enthalpy change with its surroundings.

Ethical Procedures and Guidelines Defining Psychological Research

Psychological research is often a very controversial subject among experts. Many people feel that its moral standards are often not followed. Others believe that it produces misinformation that can harm subjects and others, and still others believe that psychology is a collection of theories without supporting evidence. Whether or not any of these assumptions is true, guidelines have been created which serve to answer many critics of the science. These guidelines make research safe and structured, protecting subjects from unnecessary harm.
As psychology advances, more rules and regulations are needed to ensure subjects' comfort; hence there are many more rules now than even twenty years ago. These rules encompass a few broad but very important ideas. One is protecting the dignity of the subjects. Another important component of the code concerns consent. A further gray area in psychology lies in the deception of subjects, and there are basic rules guiding how deception may be carried out. A large section of the code addresses animal research, and the last major section of the APA ethical guidelines has to do with reporting information and giving credit where credit is due. All of these regulations, explained in greater detail below, make research safer for the subjects and increase the effectiveness of psychological research.
In psychological research, protecting subjects' dignity is very important: without willing subjects the research process would be brought to a halt. To protect subjects' dignity, experiments must be well prepared and ethically appropriate. Only the subjects who are targeted should be affected, and if a large group of people is to be affected, psychologists should consult experts on that specific group. Psychologists are held directly responsible for the ethics of an experiment, and they are also bound by the ordinary laws governing research. Beyond these requirements, psychologists must inform subjects of the basic procedure they will be agreeing to, which flows into the idea of informed consent.
Informed consent means, basically, that subjects must be informed of the basic procedure that they will be agreeing to, and there should not be any variations from the agreed-upon plan. Whenever there is doubt about whether informed consent is necessary, an institution or an expert on the subject population should be consulted. One complicating factor here is deception in research. To conduct certain experiments, it is helpful for psychologists to deceive the participants about exactly which experiment is being performed on them. The rules concerning this are effective but (necessarily) rather vague. First, psychologists are never supposed to use deception unless no alternative method is available for the experiment at hand. The deception cannot be of a kind that would affect the participants' decision to participate. And any deception that takes place should be explained as soon as possible after the experiment has reached its conclusion.
To preserve subjects' dignity, information about the experiment in which they participated should be made available to them as soon as possible. This includes the exact nature of the experiment, the results, and the conclusions. This will usually have been agreed upon by the experimenter and the subject in advance, but in any case experimenters are required to honor all commitments made to the subject. This improves the credibility of the science as a whole.
When the subjects are not human, there are still rules governing their treatment. These pertain mostly to protecting the (relative) comfort of the subjects during experimentation. Basically, when experimenting on animals, basic care procedures must be followed. When anesthesia or euthanasia is to be used, it must be carried out in a fashion that is both professional and as comfortable as possible for the subjects. The procedures that can be carried out on animals are more drastic than those on humans, because there is no informed consent involved in the study of animals, and the procedures are justified on the ground that the results are intended to assist in the betterment of the human race.
The last area of the APA code concerns reporting information. The usual plagiarism rules are, as always, in effect, in addition to precise rules against scientific falsification: a scientist may not falsify or fabricate information. If a psychologist discovers any significant errors in a study after the fact, steps to correct those errors must be taken immediately. Psychologists must also give credit where it is due, and must not leave relevant information out.

All of these regulations seem very logical, and it is well that they should. They have been developed over many years of the study of psychology. For current times these rules seem sufficient, but the book of code should never be closed: there will always be a new situation in which a new addendum is required to protect a subject or to assist in the research. As is the case with therapy, there will without a doubt be court cases that change the code of ethics. Still, the APA code seems as sound as any that is practical in this age. Some of these regulations may limit the immediate results that can be gained, but without them there would be a definite lack of willing volunteers, which would essentially bring psychological research on humans to a halt.


PETROLEUM AND ITS DERIVATIVES

Derivatives: plastics, medicines, perfumes, fabrics, detergents.
Chemical composition: a mixture of hydrocarbons (alkanes, alkenes, naphthenes, and aromatics).


Petroleum is an oily liquid, lighter than water, dark in color and strong-smelling, which is found in its native state, sometimes forming great pools in the upper strata of the earth's crust. It is a mixture of hydrocarbons (alkanes, alkenes, naphthenes, and aromatics), is insoluble in water, burns easily and, subjected to fractional distillation, yields a great number of volatile products.

It was known in antiquity: the Chaldeans used petroleum asphalt as mortar in their buildings, and the Egyptians used it to embalm their dead; but it did not acquire commercial importance until the beginning of the 19th century, with the opening of the great North American deposits.


It is believed to have formed from the decomposition of organic matter under certain conditions of pressure, temperature, and so on. Its composition varies greatly according to its origin (paraffinic petroleums, naphthenic petroleums, petroleums rich in aromatic hydrocarbons).

Petroleum is generally extracted by drilling into the ground. Near the deposits, tanks are built to collect the liquid, which is then carried by pipes and pipelines to the refineries or to the ports.

For its different uses, crude petroleum must be subjected to fractional distillation and subsequent rectification of the fractions obtained, thereby separating the following products:

· Gases, or very easily volatilized liquids, which are sometimes used as fuel in the same operation
· Low-boiling gasolines, or light petroleum, boiling below 150°
· Burning oil (kerosene), the portion that distills between 150° and 170°
· Heavy gas-oil fractions and solids, with boiling points above 350°, which remain in the still.

These last fractions can be transformed into the first ones by the process of cracking, which consists of heating a macromolecular substance at high temperature and pressure until its molecules split into simpler ones. In this way gasolines are obtained from fractions with boiling points above 300°.


The petroleum industry has undergone great development, and for this reason the search for petroleum at sea has intensified, mainly in the vicinity of the Gulf of Mexico and in the North Sea.

Petroleum is considered the present-day source for obtaining plastics, synthetic rubber, fibers, synthetic detergents, and the numerous additives used in the oil industry, including tetraethyl lead and various antioxidants.


Petroleum may come to revolutionize industry. In the food industry, for example, various scientific studies are under way to obtain food from cultures of microorganisms grown on petroleum.

The enormous current consumption of petroleum suggests that, in the relatively near future, the deposits known today, which are undoubtedly the great majority of those existing in the earth's crust, will be exhausted.





[Map showing some oil fields omitted.]


OIL-PRODUCING COUNTRIES: U.S.S.R. · SAUDI ARABIA · U.S.A. · IRAQ · IRAN · KUWAIT · VENEZUELA · NIGERIA · P.R. CHINA · LIBYA

Screening for Thyroid Diseases

TABLE OF CONTENTS
I. INTRODUCTION
A. PHYSIOLOGICAL ROLE OF THYROID HORMONES
B. BIOSYNTHESIS OF THYROID HORMONES
C. DIAGNOSTIC METHODS
D. SIGNS AND SYMPTOMS
E. THYROID TESTS
F. MORPHOLOGICAL TESTS OF THE THYROID GLAND
G. TREATMENTS
II. CONCLUSION
III. REFERENCES

INTRODUCTION

The thyroid gland plays an essential role in the control of general metabolism, in particular that of carbohydrates. The gland is located on the front of the neck, just below the larynx. Arranged in the form of sacs, the cells of the thyroid secrete several hormones, chiefly thyroxine and calcitonin. The thyroid synthesizes these hormones and stores them in its colloid; normally it releases them slowly into the bloodstream, or holds them in reserve for at least 100 days.
Physiological role of thyroid hormones
The principal functions of thyroid hormones in humans are protein synthesis and energy metabolism. These hormones are, however, also involved in several other physiological activities. They cause an increase in lipolysis while lowering a person's cholesterol level. They promote growth by acting on the chondrocytes located in bone. They take part in thermoregulation and can accelerate the heart rate. They speed the intestinal absorption of carbohydrates while increasing carbohydrate catabolism (glycogenolysis). They are involved in the relaxation phase of muscle contraction. Finally, they are able to increase diuresis and the urinary and fecal elimination of calcium (Matte & Bélanger, 1985).
Biosynthesis of thyroid hormones
The main system controlling the concentration of thyroxine in the blood is exercised by thyroid-stimulating hormone (TSH), which comes from the adenohypophysis. The secretion of TSH is governed by negative feedback from the concentration of thyroxine in the blood. A second system is managed by a neurohormone called thyrotropin-releasing hormone (TRH). As soon as thyroxine in the blood falls, secretion of TSH and TRH follows. Once TSH reaches the thyroid gland, it triggers a release of thyroid hormones (Matte & Bélanger, 1985).

Generally, the synthesis of thyroid hormones takes place in four steps. First, iodine from the food and liquids we ingest is taken up by the thyroid gland. Second, the iodine is oxidized and organified so that it can be incorporated into thyroglobulin to form monoiodotyrosine (MIT) and diiodotyrosine (DIT). Third, these iodotyrosines are coupled by oxidation to form T4 and T3, which can then be stored in the colloid before their secretion. The secretion of these hormones constitutes the fourth step (Matte & Bélanger, 1985).

To sustain this hormone synthesis, humans need a minimum intake of about 50 to 200 µg/day of iodine. Iodine from foods such as seafood, and from beverages, is absorbed by the intestine in the form of iodides. "Of the 25 mg of iodine in the human body, 30 to 50% is found in the thyroid gland (a concentration nearly 1300 times that of other tissues), that is, 9 to 12 mg" (Idleman, 1990, p. 75). In the blood, the iodine concentration is on the order of 6 to 12 µg/100 ml; of this, 1 µg/ml is organic iodine, 5 to 7 µg/ml corresponds to MIT and DIT, and 95% is associated with T4 bound to an alpha-globulin (TBG, thyroxine-binding globulin).
Diagnostic methods
Understanding the systems that control the secretion of thyroid hormones makes it easier to diagnose hyposecretion (hypothyroidism) or hypersecretion (hyperthyroidism) of this gland. The gland also exerts a decisive influence on growth, which is most evident when thyroid insufficiency appears early in life. In addition to arrested body growth, malformations of the face and of the cells of the brain are seen, a condition called cretinism, often characterized by mental retardation. Although many complications are associated with these disorders, we will now examine in more detail the signs and symptoms of hypothyroidism and hyperthyroidism (Matte & Bélanger, 1985).
Signs and symptoms
The characteristic signs and symptoms of hypothyroidism are: fatigue, dry skin, constipation, menstrual disturbances, muscle cramps, weight gain, bradycardia, a dilated and flaccid heart, infertility, galactorrhea, carpal tunnel syndrome, apathy, and anemia. On the other side, the signs and symptoms of hyperthyroidism are: tachycardia, fatigue, nervousness, tremors, osteoporosis, goiter, heat intolerance, ophthalmopathy, polyphagia, psychosis, onycholysis, cardiac decompensation, and hepatosplenomegaly (Idleman, 1990).

In the clinic, thyroid diseases sometimes present with principal signs and symptoms that are generally detected during a medical examination; the discovery of an abnormal nodule on clinical examination, or through a biochemical test, is a common example. The physician therefore has two means of detecting disorders of the thyroid gland: the clinical examination and thyroid tests.

To determine the morphology of the gland, the physician relies on inspection and palpation. These methods make it possible to define the shape, volume, and consistency of the gland in question (Idleman, 1990, p. 75).
Thyroid tests
Thyroid tests make it possible to confirm the presence of hyperthyroidism or hypothyroidism in the patient. Several tests evaluate the function and the morphology of the thyroid gland. The tests that assess function are: assays of plasma T4, T3, and TSH, measurement of the saturation level of TBG, and the free T4 index (FTI or FT4I). The usual serum levels of T4, T3, TBG, TSH, and free T4 in a normal person are as follows: T4, 4.5 to 11.5 µg/100 ml; T3, 90 to 200 ng/100 ml; TBG, 25 to 35%; TSH, 0 to 6 µU/ml; free T4, 0.7 to 1.8 ng/100 ml (Matte & Bélanger, 1985).
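
As a simple illustration, the sketch below checks serum values against the normal ranges quoted above. The example values are hypothetical, and the snippet only illustrates the ranges; it is not a diagnostic tool.

```python
# A minimal sketch that flags serum values outside the normal ranges
# quoted in the text above. Example inputs are hypothetical.

NORMAL_RANGES = {  # (low, high), units as given in the text
    "T4 (ug/100ml)": (4.5, 11.5),
    "T3 (ng/100ml)": (90, 200),
    "TBG (%)": (25, 35),
    "TSH (uU/ml)": (0, 6),
    "free T4 (ng/100ml)": (0.7, 1.8),
}

def flag_results(results: dict[str, float]) -> None:
    """Print each test value with a low/normal/high flag."""
    for test, value in results.items():
        low, high = NORMAL_RANGES[test]
        status = "low" if value < low else "high" if value > high else "normal"
        print(f"{test}: {value} ({status})")

flag_results({"T4 (ug/100ml)": 14.2, "TSH (uU/ml)": 0.1})
```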

To measure the saturation level of TBG, radioactive T3 is added to a sample of the patient's blood. The serum TBG and a resin are placed in competition for the radioactive T3. The percentage taken up by the resin indicates the number of free sites on the TBG. In other words, a high radioactive T3 uptake indicates either hyperthyroidism or a decrease in TBG, while a low radioactive T3 uptake indicates either hypothyroidism or an elevation of TBG.
To determine a patient's free T4 index (FT4I), the following formula is generally used (T3 U = radioactive T3 uptake):

FT4I = T4 x [% T3 U (patient) / % T3 U (normal)]

Hyperthyroidism: T4 elevated, FT4I elevated.
Hypothyroidism: T4 lowered, FT4I lowered.
Contraceptive pill: T4 elevated, FT4I normal.
Nephrotic syndrome: T4 lowered, FT4I normal.
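
A minimal sketch of the index as defined above, assuming a hypothetical normal resin uptake of 30%:

```python
# Free T4 index: total T4 corrected for TBG binding via the T3 resin-uptake
# ratio. The default normal uptake of 30% is an illustrative assumption.

def free_t4_index(total_t4_ug_100ml: float,
                  t3_uptake_patient_pct: float,
                  t3_uptake_normal_pct: float = 30.0) -> float:
    """FT4I = total T4 x (patient % T3 uptake / normal % T3 uptake)."""
    return total_t4_ug_100ml * (t3_uptake_patient_pct / t3_uptake_normal_pct)

# Example: elevated total T4 with a low uptake ratio (as with raised TBG on
# the contraceptive pill) yields a roughly normal index.
print(free_t4_index(14.0, 22.0))  # 14 x 22/30 = 10.27, within 4.5-11.5
```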

Another test is the I-131 uptake test, which checks the uptake capacity of the thyroid gland when a tracer dose of I-131 is administered to the patient. Simply put, elevated uptake indicates hyperthyroidism, while diminished uptake indicates hypothyroidism (Matte & Bélanger, 1985).

The TRH test is another measure that can detect thyroid disease. Intravenous administration of TRH normally causes an increase in serum TSH. In hyperthyroidism this response is absent, whereas in hypothyroidism it is exaggerated.

Measuring the patient's cholesterol level is also useful in assessing the type of thyroid disease: it is generally elevated in hypothyroidism and reduced in hyperthyroidism. Measuring certain enzymes, such as CPK and alkaline phosphatase, also helps in formulating a diagnosis. CPK is often elevated in hypothyroidism, and vice versa; alkaline phosphatases are often elevated in hyperthyroidism and reduced in hypothyroidism. In addition, the relaxation time of the tendon reflexes is notably prolonged in hypothyroidism, but this phenomenon also appears in other conditions such as diabetes, edema, and vascular disease, and so does not prove thyroid disease conclusively (Matte & Bélanger, 1985).
Morphological tests of the thyroid gland
Several procedures make it possible to evaluate the morphology of the thyroid gland. Scanning (scintigraphy) is a very popular test that establishes characteristics of the gland such as goiter, metastases, and cancers. Ultrasound makes it possible to identify masses such as cysts; this method is also widely used because it is simple and noninvasive. Puncture and biopsy are very effective at distinguishing a solid mass from a cyst, and aspiration of a cyst permits an evaluation made by interpreting the slides in the laboratory. Antithyroid antibody levels can also be measured in cases of Hashimoto's thyroiditis and of Graves' disease. Finally, to detect thyroid cancers, thyroglobulin is assayed after the thyroid has been destroyed (Matte & Bélanger, 1985).
Treatments

Three general forms of treatment are available for patients suffering from hyperthyroidism. First, they may undergo surgical thyroidectomy. Second, the disease can be treated by administering radioactive iodine to the patient each day until a certain amount of thyroid tissue has been destroyed. Third, antithyroid drugs such as propylthiouracil or methimazole, which inhibit the production of thyroid hormones, can be administered.

Cases of hypothyroidism are generally treated by the oral administration of thyroid hormones. The most widely used thyroid hormone is synthetic T4, levothyroxine (SYNTHROID, LEVOTHROID, LEVOXIL), at about 0.15 mg/day (Matte & Bélanger, 1985).

CONCLUSION

Patients suffering from hypothyroidism or hyperthyroidism must generally undergo numerous diagnostic tests to establish the cause of the disease and the appropriate medical or surgical treatment. Several drugs and interventions reduce the recurrence rate of these diseases; nevertheless, it is essential to report any signs and symptoms before the condition progresses in a harmful way.

REFERENCES

Idleman, Simon (1990). Endocrinologie: fondements physiologiques. Grenoble: Presses Universitaires de Grenoble.

Matte, R., & Bélanger, R. (1985). Endocrinologie. Montréal: Les Presses Universitaires de Montréal.

Rosenzweig, M. R., & Leiman, A. L. (1991). Psychophysiologie (2e édition). Québec: Décarie Éditions Inc.

UNIVERSITÉ LAURENTIENNE

Chemistry Project:
SCREENING FOR THYROID DISEASES

By: Luc Gervais

Presented to Dr. Vasu Apanna

For the course:
CHMI 2220 FA

Date due:
March 4, 1997

Do Cleaning Chemicals Clean as Well After They Have Been Frozen?

Problem:
The researcher is trying to determine whether or not cleaning materials will clean as well if they have been frozen solid and subsequently thawed back to a liquid state.
The researcher will use Dial Antibacterial Kitchen Cleaner, Clorox Bleach, and Parson's Ammonia, applied to simple bacon grease, to determine which chemical is least affected by freezing.

Hypothesis:
The researcher believes that freezing will degrade the ability of these three household cleaning chemicals to break down the most common kitchen cleaning problem: grease.
For example, freezing ice cream, thawing it, and then freezing it again separates its heavy and light components and breaks down the food. The researcher believes the same thing may happen to the cleaning materials.

Experimentation
Test Concept:
In order to determine whether the freezing process affected the cleaning chemicals, it was first important to establish their potency prior to freezing. Accordingly, two test sets were created. The purpose of the test was to determine how well the chemicals could break down household grease before and after being frozen. The first test set used unfrozen chemicals, while the second used previously frozen chemicals.

The Test:
To start the experiment, the researcher fried four pieces of bacon until there was enough grease in the skillet to perform the test. He then put quarter-teaspoon spots of the grease onto two nine-by-thirteen casserole dishes, each dish set up for three frozen and three unfrozen chemical cleaners. A measured amount of cleaner (frozen or unfrozen) was added to each spot of grease. After approximately two minutes of breaking down the grease, the dishes were raised to a uniform height at one end and the broken-down grease was allowed to run. By measuring how far the grease ran, the researcher could determine how much each cleaner broke down the grease, and therefore which cleaner was affected by freezing.
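
The comparison described above can be sketched as follows. The run distances are hypothetical placeholders, since the report's actual measurements appear only in its charts:

```python
# A minimal sketch of the frozen-vs-unfrozen comparison. Run distances (cm)
# are hypothetical placeholders, not the report's measurements.

runs = {
    "Dial":    {"unfrozen": 9.0, "thawed": 9.5},
    "Clorox":  {"unfrozen": 7.0, "thawed": 7.0},
    "Ammonia": {"unfrozen": 8.0, "thawed": 8.5},
}

# A longer run means the grease broke down more, i.e. the cleaner worked better.
for cleaner, d in runs.items():
    change = d["thawed"] - d["unfrozen"]
    print(f"{cleaner}: change after freezing = {change:+.1f} cm")
```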

Resources
The resources for this experiment were the labels of the chemicals. Research was also done to find information about the chlorine in the Clorox Bleach, but this was unsuccessful. Research was also done into why the 409 degreaser performed so poorly.




Conclusion
The researcher has concluded that the previously frozen chemicals performed just as well as, if not better than, the unfrozen chemicals. See charts one and two for details.

DNA: What Is It?



Deoxyribonucleic acid and ribonucleic acid are two chemical substances involved in transmitting genetic information from parent to offspring. It was known early in the 20th century that chromosomes, the genetic material of cells, contained DNA. In 1944, Oswald T. Avery, Colin M. MacLeod, and Maclyn McCarty concluded that DNA was the basic genetic component of chromosomes. Later, RNA would be shown to regulate protein synthesis. (Miller, 139)

DNA is the genetic material found in most viruses and in all cellular organisms. Some viruses do not have DNA, but contain RNA instead. Depending on the organism, DNA is found in a single chromosome, as in bacteria, or in several chromosomes, as in most other living things. (Heath, 110) DNA can also be found outside of chromosomes: in plasmids in bacteria, in the chloroplasts of plants, and in the mitochondria of plants and animals.

All DNA molecules contain a set of linked units called nucleotides. Each nucleotide is composed of three things. The first is a sugar called deoxyribose. Attached to one end of the sugar is a phosphate group, and at the other is one of several nitrogenous bases. DNA contains four nitrogenous bases. The first two, adenine and guanine, are double-ringed purine compounds. The others, cytosine and thymine, are single-ringed pyrimidine compounds. (Miller, 141) Four types of DNA nucleotides can be formed, depending on which nitrogenous base is involved.

The phosphate group of each nucleotide bonds with a carbon from the deoxyribose. This forms what is called a polynucleotide chain. James D. Watson and Francis Crick proved that most DNA consists of two polynucleotide chains that are twisted together into a coil, forming a double helix. Watson and Crick also discovered that in a double helix, the pairing between bases of the two chains is highly specific. Adenine is always linked to thymine by two hydrogen bonds, and guanine is always linked to cytosine by three hydrogen bonds. This is known as base pairing. (Miller, 143)
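
A minimal sketch of the base-pairing rule just described, in Python (the example sequence is arbitrary):

```python
# Watson-Crick base pairing: each base in one strand determines its partner
# in the other strand (A pairs with T, G pairs with C).

PAIR = {"A": "T", "T": "A", "G": "C", "C": "G"}

def complement(strand: str) -> str:
    """Return the base-paired partner strand, read in the same direction."""
    return "".join(PAIR[base] for base in strand)

print(complement("ATGCCGTA"))  # TACGGCAT
```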

The DNA of an organism provides two main functions. The first is to provide for protein synthesis, allowing growth and development of the organism. The second is to give all of its descendants its own protein-synthesizing information by replicating itself and providing each offspring with a copy. The information within the bases of DNA is called the genetic code. This specifies the sequence of amino acids in a protein. (Grolier Encyclopedia, 1992) DNA does not act directly in the process of protein synthesis, because it does not leave the nucleus, so a special ribonucleic acid is used as a messenger (mRNA). The mRNA carries the genetic information from the DNA in the nucleus out to the ribosomes in the cytoplasm during transcription. (Miller, 76)

This leads to the topic of replication. When DNA replicates, the two strands of the double helix separate from one another. While the strands separate, each nitrogenous base on each strand attracts its own complement, which, as mentioned earlier, attaches with hydrogen bonds. As the bases are bonded, an enzyme called DNA polymerase joins the phosphate of one nucleotide to the deoxyribose of the adjacent nucleotide.

This forms a new polynucleotide chain. The new DNA strand stays attached to the old one through the hydrogen bonds, and together they form a new DNA double helix molecule. (Heath, 119) (Miller, 144-145)

As mentioned before, DNA molecules are involved in a process called protein synthesis, and without RNA this process could not be completed. RNA is also the genetic material of some viruses. RNA molecules are like DNA: they have a long chain of macromolecules made up of nucleotides. Each RNA nucleotide is also made up of three basic parts: a sugar called ribose, with a phosphate group at one end and one of several nitrogenous bases at the other. There are four main nitrogenous bases found in RNA: the double-ringed purine compounds adenine and guanine, and the single-ringed pyrimidine compounds uracil and cytosine. (Miller, 146)

RNA synthesis is much like that of DNA. In RNA synthesis, the molecule being copied is one of the two strands of a DNA molecule, so the molecule being created is different from the molecule being copied. This is known as transcription. Transcription can be described as a process in which information is transferred from DNA to RNA. All of this must happen so that messenger RNA can be created, because the actual DNA cannot leave the nucleus. (Grolier Encyclopedia, 1992)

For transcription to take place, the RNA polymerase enzyme is needed, first to separate the two strands of the double helix and then to create an mRNA strand, the messenger. The newly formed mRNA will be a copy of one of the original two strands, with uracil in place of thymine. This is assured through base pairing. (Miller, 147)

When information is passed from DNA to RNA, it comes coded, and the origin of the code is directly related to the way the four nitrogenous bases are arranged in the DNA. DNA and RNA are important because they control protein synthesis. Proteins control both the cell's movement and its structure, and they also direct production of lipids, carbohydrates, and nucleotides. DNA and RNA do not actually produce these proteins, but tell the cell what to make. (Heath, 111-113)

For a cell to build a protein according to the DNA's instructions, an mRNA must first reach a ribosome. After this has occurred, translation can begin to take place: chains of amino acids are constructed according to the information carried by the mRNA. The ribosomes are able to translate the mRNA's information into a specific protein. (Heath, 116) This process also depends on another type of RNA called transfer RNA (tRNA). The cytoplasm contains all the amino acids needed for protein construction, and the tRNA must bring the correct amino acids to the mRNA so they can be aligned in the right order by the ribosomes. (Heath, 116) For protein synthesis to begin, the two parts of a ribosome must secure themselves to an mRNA molecule. (Miller, 151)
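
The flow from DNA template to mRNA to amino acid chain described above can be sketched as follows. The codon table here is a tiny illustrative subset of the real genetic code, and the template sequence is arbitrary:

```python
# Transcription (DNA template -> mRNA, U replaces T) and translation
# (mRNA codons -> amino acids). CODONS is a small subset for illustration.

DNA_TO_MRNA = {"A": "U", "T": "A", "G": "C", "C": "G"}
CODONS = {"AUG": "Met", "UUU": "Phe", "GGC": "Gly", "UAA": "STOP"}  # subset

def transcribe(template_strand: str) -> str:
    """Build mRNA complementary to the DNA template strand."""
    return "".join(DNA_TO_MRNA[base] for base in template_strand)

def translate(mrna: str) -> list[str]:
    """Read the mRNA three bases at a time, stopping at a stop codon."""
    protein = []
    for i in range(0, len(mrna) - 2, 3):
        amino = CODONS.get(mrna[i:i + 3], "?")
        if amino == "STOP":
            break
        protein.append(amino)
    return protein

mrna = transcribe("TACAAACCGATT")  # arbitrary template DNA
print(mrna, translate(mrna))       # AUGUUUGGCUAA ['Met', 'Phe', 'Gly']
```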

Methods and Materials:



For the first part of the lab, colored paper clips were needed to construct two DNA strands. Each color of paper clip represented one of the four nitrogenous bases: black was adenine, white was thymine, blue was cytosine, and yellow was guanine. A short sequence of the human gene that controls the body's growth hormone was then constructed using ten paper clips, and the complementary strand of the gene was made using ten more. The two model strands were laid side by side to show how the bases would bond with each other. The model molecule was then opened and more nucleotides were added to show what happens during replication.

For the second part of the lab, models of DNA, mRNA, tRNA, and amino acids were used to simulate transcription, translation, and protein synthesis. The model molecules were cut out with scissors and placed on the table, the DNA and mRNA molecules on the left side and the others on the right. To simulate transcription, the mRNA molecule was slid down the DNA strand until the nucleotides matched. The mRNA molecule was then moved from the left side of the table to the right, showing its movement from the nucleus to the cytoplasm. The tRNA molecules were then each matched up with an amino acid. Once matched, they were slid along the mRNA until their nucleotides matched.



Conclusions:



The most surprising discovery was that there are only four main bases in a DNA or RNA molecule, and that each of these bases will bond with only one other base. It is important to realize how greatly DNA affects a cell's functions, in growth, movement, protein building, and many other duties. DNA is also not nearly as complex in structure as I had thought, containing only its three main parts: a sugar, a phosphate, and a base. From these studies it is easy to see how DNA and RNA greatly affect the life and functions of an organism.



Bibliography:

Emmel, Thomas C. Biology Today. Chicago: Holt, Rinehart and Winston, 1991.

Foresman, Scott. Biology. Oakland, New Jersey: Scott Foresman and Company, 1988.

Hole, John W., Jr. Essentials. Dubuque, Iowa: Wm. C. Brown Company Publishers, 1983.

Mader, Sylvia S. Inquiry Into Life. New York: Wm. C. Brown Company Publishers, 1988.

McLaren, Rotundo. Heath Biology. New York: Heath Publishing, 1987.

Miller, Kenneth R. Biology. New Jersey: Prentice Hall, 1993.

Welch, Claude A. Biological Science. Boston: Houghton Mifflin Company, 1968.


The Discovery Of The Electron

The electron was discovered in 1897 by J.J. Thomson in the
form of cathode rays, and was the first elementary particle to be
identified. The electron is the lightest known particle which
possesses an electric charge. Its rest mass is m_e ≈ 9.1 x 10^-28 g,
about 1/1836 of the mass of the proton or neutron.

The charge of the electron is -e = -4.8 x 10^-10 esu (electrostatic
units). The sign of the electron's charge is negative by
convention, and that of the equally charged proton is positive.
This is a somewhat unfortunate convention, because the flow of
electrons in a conductor is opposite to the conventional
direction of the current.

The most accurate direct measurement of e is the oil drop
experiment conducted by R.A. Millikan in 1909. In this experiment,
the charges of droplets of oil in air are measured by finding the
electric field which balances each drop against its weight. The
weight of each drop is determined by observing its rate of free
fall through the air, and using Stokes' formula for the viscous
drag on a slowly moving sphere. The charges thus measured are
integral multiples of e.
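
A minimal sketch of the oil-drop logic, assuming illustrative values for the oil density, field strength, and fall speed (none of which come from the text):

```python
# Oil-drop logic: find the drop's radius (hence weight) from its free-fall
# speed via Stokes' law, then balance weight against the electric force q*E.
# All input values are illustrative assumptions.

import math

ETA_AIR = 1.8e-5   # Pa*s, viscosity of air
RHO_OIL = 900.0    # kg/m^3, assumed oil density
G = 9.81           # m/s^2

def drop_charge(v_fall: float, e_field: float) -> float:
    """Charge (C) on a drop that falls at v_fall (m/s) with the field off
    and hangs motionless when the field e_field (V/m) is applied."""
    radius = math.sqrt(9 * ETA_AIR * v_fall / (2 * RHO_OIL * G))  # Stokes
    weight = RHO_OIL * (4 / 3) * math.pi * radius**3 * G
    return weight / e_field  # balance condition: q * E = m * g

q = drop_charge(v_fall=1.1e-4, e_field=5.0e4)
print(f"q = {q:.2e} C, about {q / 1.602e-19:.1f} elementary charges")
# q = 7.50e-19 C, about 4.7 elementary charges
```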

Electrons are emitted in radioactivity (as beta rays) and in
many other decay processes. The electron itself is completely
stable. Electrons contribute the bulk of the volume of ordinary
matter: the volume of an atom is nearly all occupied by the cloud
of electrons surrounding the nucleus, which itself occupies only
about 10^-13 of the atom's volume. The chemical properties of
ordinary matter are determined by the electron cloud.

The electron obeys Fermi-Dirac statistics, and for this
reason is often called a fermion. One of the primary attributes
of matter, impenetrability, results from the fact that the
electron, being a fermion, obeys the Pauli exclusion principle.

The electron is the lightest of a family of elementary
particles, the leptons. The other known charged leptons are the
muon and the tau. These three particles differ only in mass;
they have the same spin, charge, and weak interactions, and none
of them takes part in the strong interaction. In a weak interaction
a charged lepton is either unchanged or changed into an uncharged
lepton, that is, a neutrino. In the latter case, each charged
lepton is seen to change only into the corresponding neutrino.

The electron has magnetic properties by virtue of (1) its
orbital motion about the nucleus of its parent atom and (2) its
rotation about its own axis. The magnetic properties are best
described through the magnetic dipole moments associated with 1
and 2. The classical analog of the orbital magnetic dipole moment
is that of a small current-carrying circuit. The electron spin
magnetic dipole moment may be thought of as arising from the
circulation of charge, that is, a current, about the electron
axis; but a classical analog to this moment has much less meaning
than the one for the orbital magnetic dipole moment. The magnetic
moments of the electrons in the atoms that make up a solid give
rise to the bulk magnetism of the solid.
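
The natural unit for these magnetic moments is the Bohr magneton, mu_B = e*hbar / (2*m_e). A minimal sketch computing it from standard constants:

```python
# Bohr magneton from standard SI constants: mu_B = e * hbar / (2 * m_e).

E_CHARGE = 1.602176634e-19     # C
HBAR = 1.054571817e-34         # J*s
M_ELECTRON = 9.1093837015e-31  # kg

mu_B = E_CHARGE * HBAR / (2 * M_ELECTRON)
print(f"mu_B = {mu_B:.3e} J/T")  # about 9.274e-24 J/T
```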


Diet and Cancer... What is the Link?

Today we know that too much of certain types of food can have harmful effects on our health and well-being, and we are learning that diseases such as cancer are caused in part by our dietary choices.

In the 1950's scientists discovered a relationship between diet and coronary heart disease, the nation's number one killer. In the last 15 years a link between cancer and diet has also been discovered.

The National Academy of Sciences (NAS), an organization of the nation's foremost scientists, found the evidence so persuasive that in their landmark 1982 report Diet, Nutrition and Cancer they urged Americans to begin changing their diets to reduce their risk of developing cancer. The results of the study were supported by later research done by the NAS, the Surgeon General, the Departments of Agriculture and Health and Human Services, and the National Institutes of Health.

Based mainly on the 1982 NAS study, the American Institute for Cancer Research (AICR) devised a four-part guideline to help lower people's risk of developing cancer. The guidelines have been updated since then to reflect more recent research on the link.

The AICR guidelines are:
1. Reduce the intake of total dietary fat to a level of no more than 30% of total calories and, in particular, reduce the intake of saturated fat to less than 10% of total calories.
2. Increase the consumption of fruits, vegetables and whole grains.
3. Consume salt-cured, salt-pickled and smoked foods only in moderation.
4. Drink alcoholic beverages only in moderation, if at all.

Most cancers start when the body is exposed to a carcinogen, a cancer-causing substance found everywhere in our environment, for example in sunlight. When the body is exposed to such a substance, it can usually destroy the carcinogen without malignant effects. If any of the substance eludes the body's defense system, it can alter a cell's genes and make it become a cancerous cell.

Cancer doesn't suddenly appear; it develops through gradual stages, and the initial stages can be reversed. The foods we eat can either speed the rate at which these stages advance or help fight the cancer and prevent it from spreading. Salt-cured and salt-pickled foods don't contain carcinogens themselves; rather, they contain ingredients that are changed into carcinogens during digestion. Smoked foods are a little different: they carry the carcinogen in them already.
Fruits, vegetables, and whole grains should be eaten to help prevent cancer or fight its advance through the body. Foods high in fat, which include marbled meats, baked goods such as cookies and pastries, and high-fat dairy products, should be avoided: they help a cancer cell grow, multiply and spread.

Following these guidelines will not guarantee that one will not get cancer, but it will lower one's chances. Cancer is still somewhat of an unknown disease, but we do know that the foods one eats can have powerful effects on its development. This is good news, because it gives people an opportunity to help stop or prevent cancer.


Determining the Ratio of Circumference to Diameter of a Circle

In determining the ratio of the circumference to the diameter, I began by measuring the diameter of one of the six objects which contained circles; then, using a string, I wrapped the string around the circle and compared the length of the string, which measured the circumference, to a meter stick. With this method I measured all six circles. After I had this data, I went back and rechecked the circumferences with a tape measure, which allowed me to measure the objects' circumferences more accurately by removing some of the error that my method of using a string created.
After I had the measurements I laid them out in a table. The objects that I measured were a small flask, a large flask, a tray from a scale, a roll of tape, a roll of paper towels, and a spray can.
By dividing the circumference of each circle by its diameter I was able to calculate the experimental ratio, and I knew that the accepted ratio was pi. I then put both ratios in the chart.
By subtracting the accepted ratio from the experimental ratio you find the error. Error is the deviation of the experimental ratio from the accepted ratio. Once I had the error I could go on to find the percentage error. The equation I used was error divided by the accepted ratio, times 100. For example, the error of the experimental ratio for the paper towels was 0.12. I took that and divided it by the accepted ratio, giving me 0.03821651; then I multiplied that by 100, giving me a percentage error of about 3.8. Using these steps I found the percentage error for all of the objects measured.
The next step was to graph the results. I was able to do this very easily with a spreadsheet: I typed in all of my data and the computer gave me a nice scatter plot. I also made a graph by hand. I set up the scale by taking the number of blocks up the side of my graph and dividing by the number of blocks across. I placed my points on my hand-drawn graph, and once I did this I drew a line of best fit, because some of the points were off a little bit due to error.

By looking at my graph I can tell that these numbers are directly proportional to each other. This lab was a good way to learn about the error involved in such things as measurements, and it also provided me with a good reminder of how to construct graphs.

There were many sources of error in this lab. First, errors can be found in the elasticity of the string or measuring tape. Second, there are errors in everyone's measurements; errors may arise when a person moves their finger off of the marked spot on the measuring device.


Object                  Circumference (cm)    Diameter (cm)
small flask             20.5                  6.3
large flask             41.3                  12.9
tray from a scale       40.1                  9.5
roll of tape            6.4                   1.2
roll of paper towels    44.5                  11.8
spray can               25.1                  7.7
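
The divide, subtract, and percentage steps described above are easy to script. The following is a minimal Python sketch applied to the tabulated measurements (units assumed to be centimeters); note that the worked example in the text (an error of 0.12 for the paper towels) evidently came from a different reading than the tabulated one, so the printed values will not all match it.

import math

measurements = {  # object: (circumference, diameter), in cm
    "small flask": (20.5, 6.3),
    "large flask": (41.3, 12.9),
    "tray from a scale": (40.1, 9.5),
    "roll of tape": (6.4, 1.2),
    "roll of paper towels": (44.5, 11.8),
    "spray can": (25.1, 7.7),
}

for name, (c, d) in measurements.items():
    ratio = c / d                  # experimental ratio
    error = ratio - math.pi        # experimental minus accepted
    percent = error / math.pi * 100
    print(f"{name}: ratio {ratio:.2f}, error {error:+.2f}, {percent:+.1f}%")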

Determination of an unknown amino acid from a titration curve

Abstract

Experiment 11 used a titration curve to determine the identity of an unknown amino acid. The initial pH of the solution was 1.96, and the pKa's found experimentally were 2.0, 4.0, and 9.85. The accepted pKa values were 2.10, 4.07, and 9.47. The molecular weight was calculated to be 176.3, while the accepted value is 183.5. The identity of the unknown amino acid was established to be glutamic acid, hydrochloride.

Introduction

Amino acids are simple monomers which are strung together to form polymers, also called proteins. These monomers are characterized by the general structure shown in figure 1.


Fig. 1



Although the general structure of all amino acids follows figure 1, the presence of a zwitterion is made possible by the basic properties of the NH2 group and the acidic properties of the COOH group. The amine group (NH2) is a Lewis base because it has a lone electron pair, which makes it susceptible to a coordinate covalent bond with a hydrogen ion. Likewise, the carboxyl group is a Lewis acid because it is able to donate a hydrogen ion (Kotz et al., 1996). Other forms of amino acids also exist: amino acids may exist as acidic or basic salts. For example, if glycine reacted with HCl, the resulting amino acid would be glycine hydrochloride (see fig. 2), an example of an acidic salt form of the amino acid. Likewise, if NaOH were added, the resulting amino acid would be sodium glycinate (see fig. 3), an example of a basic salt form.


Fig. 2






Fig. 3






Due to the nature of amino acids, a titration curve can be employed to identify an unknown amino acid. A titration curve is a plot of pH versus the volume of titrant used. In the case of amino acids, the titrant will be both an acid and a base. The acid is a useful tool because it is able to add a proton to the amine group (see fig. 1). Likewise, the base allows for removal of the proton from the carboxyl group by the addition of hydroxide. The addition of the strong acid or base does not necessarily yield a drastic jump in pH: the protons and hydroxide ions donated to the solution are occupied adding protons to the amine group and removing protons from the carboxyl group, respectively, and so do not contribute to the pH of the solution. However, near the equivalence point the pH of the solution may increase or decrease drastically with the addition of only a fraction of a mL of titrant. This is because at the equivalence point the number of moles of titrant equals the number of moles of acid or base originally present (depending on whether the amino acid is in an acidic or basic salt form). Another point of interest on a titration curve is the half-equivalence point, which corresponds to the point at which the concentration of weak acid is equal to the concentration of its conjugate base. The region near the half-equivalence point also establishes a buffer region (Jicha et al., 1991; see figure 4).



Fig. 4





The half-equivalence point easily allows for the finding of the pKa values of an amino acid. A set of pKa values can be extremely helpful in identifying an amino acid. Through a manipulation of the Henderson-Hasselbalch equation, the pH at the half-equivalence point equals the pKa. This is reasoned because at the half-equivalence point the concentrations of the conjugate base and the acid are equal. Therefore the pH equals the pKa at the half-equivalence point (see figure 5).



Fig. 5

pKa = pH - log([base]/[acid])

at the half-equivalence point, [base] = [acid], so log([base]/[acid]) = log 1 = 0

therefore, pH = pKa
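
As a quick numeric illustration of figure 5 (a minimal sketch; the pKa and concentrations are arbitrary values, not data from this experiment):

import math

def henderson_hasselbalch_pH(pKa, base, acid):
    # pH = pKa + log10([base]/[acid])
    return pKa + math.log10(base / acid)

# At the half-equivalence point the two concentrations are equal,
# so the log term is log(1) = 0 and pH equals pKa.
print(henderson_hasselbalch_pH(4.07, base=0.05, acid=0.05))  # -> 4.07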



However, many substances characteristically have more than one pKa value. For each value, the molecule is able to give up or accept a proton. For example, H3PO4 has three pKa values because it is able to donate three protons while in solution. However, it is much more difficult to remove the second proton than the first, because it is more difficult to remove a proton from an anion; furthermore, the more negative the anion, the more difficult it is to remove the proton.
The trapezoidal method can be employed to find the equivalence points, as shown in figure 6. The volume of titrant between two equivalence points is helpful in the determination of the molecular weight of the amino acid.



Fig. 6
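
Figure 6 is not reproduced here, but the idea can be illustrated numerically. The report locates equivalence points graphically with the trapezoidal method; a simple numerical stand-in (my substitution, not the report's procedure) is to take the equivalence point as the volume where the curve is steepest, i.e. where the slope ΔpH/ΔV is largest. A sketch, using an excerpt of the accurate NaOH data from Table 2 around the large jump near 18 mL:

vol = [17.0, 17.5, 17.7, 17.8, 18.0, 18.2, 18.4, 18.5]  # mL of NaOH
pH  = [5.13, 5.63, 5.99, 6.52, 7.93, 8.18, 8.50, 8.56]

# Slope of each segment of the curve; the equivalence point is taken
# as the midpoint of the steepest segment.
slopes = [(pH[i+1] - pH[i]) / (vol[i+1] - vol[i]) for i in range(len(vol) - 1)]
i = slopes.index(max(slopes))
midpoint = (vol[i] + vol[i+1]) / 2
print(f"steepest segment {vol[i]}-{vol[i+1]} mL -> equivalence near {midpoint} mL")
# -> equivalence near 17.9 mL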




The purpose of experiment 11 is to determine the identity of an unknown amino acid by analyzing a titration curve. The experiment should demonstrate that the following may be directly or indirectly deduced from the curve: the equivalence and half-equivalence points, the pKa values, the molecular weight, and the identity of the unknown amino acid.

Experimental


The pH meter was calibrated, and 1.631 grams (0.0089 moles) of the unknown amino acid was weighed and placed in a 250-mL volumetric flask. About 100 mL of distilled water was added to dissolve the solid. The flask was gently swirled and inverted to ensure complete dissolution of the solid. The solution was diluted with distilled water to the volume mark on the flask. Then, one buret was filled with 0.100 M HCl stock solution and another buret was filled with 0.100 M NaOH. A pipet was used to add 25.00 mL of the unknown amino acid solution to a 100-mL beaker. The solution's initial pH was established to be 1.96 by the pH meter. The electrode was left in the 100-mL beaker with the unknown amino acid solution. In the accurate titration, the acid was added in 0.5 mL increments until the pH of the solution was 1.83. As the titrant was added, the pH of the solution was recorded on a data sheet, and a graph of pH versus mL of titrant added was plotted. After the addition of the acid, a new 25 mL aliquot of unknown solution was added to a clean 100-mL beaker. The base was then used to titrate the solution. It was added in 0.20 to 1.0 mL increments depending on the nature of the curve. (The nature of the curve was somewhat expected because an exploratory titration curve had been established previously; that curve used increments of up to 2.0 mL.) The base was added until the pH reached 12.03.
Results

Table 1 shows the pH endpoints for both the titration with the acid as well as with the base. It also shows the initial pH. Table 1 also shows the experimentally determined and accepted molecular weight and pKa values for the glutamic acid, hydrochloride. Tables 2 and 3 show the amounts of base and acid added to the unknown solution (respectively) and the pH which corresponds to that amount. Figures 7 and 8 represent the exploratory titration and the accurate titration curves (respectively). Figure 9 represents the structure of the unknown amino acid, glutamic acid, hydrochloride.
Table 1
Initial pH: 1.96
pH of endpoints: 1.83 (acid titration), 12.03 (base titration)
pKa values (experimental): 2.0, 4.0, 9.85
pKa values (accepted): 2.10, 4.07, 9.47
Molecular weight: 176.3 (experimental), 183.5 (accepted)
Identity of unknown: glutamic acid, hydrochloride

Table 2
Accurate Titration for NaOH
total mL of 0.10 M NaOH    pH of solution
0.00 1.96
1.0 2.05
3.0 2.26
5.0 2.5
7.0 2.84
9.0 3.28
10.0 3.53
11.0 3.77
13.0 4.14
14.0 4.39
15.0 4.56
15.5 4.66
16.0 4.78
16.5 4.93
17.0 5.13
17.5 5.63
17.7 5.99
17.8 6.52
18.0 7.93
18.2 8.18
18.4 8.50
18.5 8.56
19.0 8.83
21.0 9.44
22.0 9.62
23.0 9.82
23.5 9.93
24.0 9.98
24.5 10.12
25.0 10.21
25.5 10.37
26.0 10.52
26.5 10.69
27.0 10.86
27.5 11.06
28.0 11.22
28.5 11.37
29.0 11.41
29.5 11.53
30.0 11.58
31.0 11.71
33.0 11.85
36.0 12.03
Fig. 9





Table 3
Accurate Titration for HCl
total mL of 0.10 M HCl pH of solution
0.00 1.96
0.5 1.93
1.0 1.91
1.5 1.87
2.0 1.85
2.5 1.83


Discussion

The initial pH of the unknown solution was 1.96. This information was helpful in determining the identity of the unknown amino acid because only three of the nine unknowns were acidic salts. (Acidic salt forms of amino acids are capable of having pH values this low.) However, more information was required before the determination could be conclusive. The unknown produced three equivalence points and, therefore, three pKa values. One of the three remaining candidate amino acids could thus be eliminated, because it had only two pKa values. After examining the pKa values of the unknown, it was apparent that they were remarkably similar to those of glutamic acid, hydrochloride: the unknown's pKa values were 2.0, 4.0, and 9.85, while glutamic acid's pKa values were 2.10, 4.07, and 9.47. At this point, the identity of the amino acid was conclusive. However, as a precautionary measure, the molecular weight of the amino acid was calculated and found to be 176.3 amu. The calculated value corresponds reasonably well with the known value of 183.5 amu.
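
The molecular-weight step can be reproduced arithmetically. The report does not tabulate the volume of titrant between equivalence points that it used, so the ΔV below is back-inferred (about 9.25 mL of 0.100 M NaOH reproduces the reported 176.3 amu); treat this as an illustration of the method, not the report's actual intermediate numbers.

# Molecular weight from the titration curve: the moles of NaOH consumed
# between two successive equivalence points equal the moles of amino
# acid present in the 25-mL aliquot.
sample_mass_g = 1.631            # mass dissolved in the 250-mL flask
aliquot_fraction = 25.0 / 250.0  # 25 mL taken from 250 mL
naoh_molarity = 0.100            # mol/L

delta_v_mL = 9.25                # assumed volume between equivalence points

moles_in_aliquot = naoh_molarity * delta_v_mL / 1000.0
mw = sample_mass_g * aliquot_fraction / moles_in_aliquot
print(f"MW = {mw:.1f} amu")      # -> about 176.3 amu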
There are a few errors that can be held accountable for the small deviation from the accepted values.
First, the pH meter never reported a definite value; most of the time the meter would report a drifting number, so one had no way of knowing which reported pH was more correct. Also, the method by which the equivalence points were found was extremely crude: it called for a series of rough estimations, which led to the equivalence point; the equivalence point was then used to determine the half-equivalence point, and that point was in turn used to find the pKa. The deviation of the pKa values from the accepted values occurs because of this compounded series of crude estimates. Likewise, the deviation of the calculated molecular weight can be attributed to the same crude estimates, because the change in volume between equivalence points was used in the calculation.

Conclusion

The identity of an unknown amino acid was determined by establishing a titration curve. The equivalence and half-equivalence points, the pKa values, and the molecular weight were directly or indirectly found through the titration curve. The equivalence points were found through a crude graphical technique known as the trapezoidal method. The establishment of the equivalence points gave rise to the half-equivalence points and the Δ volume (used in calculating the molecular weight). The half-equivalence points were directly used to find the pKa values of the unknown, and the molecular weight could also be calculated. This data led to the determination of the identity of the unknown amino acid: glutamic acid, hydrochloride.



References

Jicha, D.; Hasset, K. Experiments in General Chemistry; Hunt: Dubuque, 1991; 37-53.

Kotz, J.C.; Treichel, P., Jr. Chemistry and Chemical Reactivity; Harcourt-Brace: Fort Worth, 1996; 816-837.

Design of Structures in respect to heat efficiency

OUTLINE


Introduction
Problem
What materials are better for insulation?
What designs are better for insulation?
Purpose
Background
Organizations Researching Problem
Materials
lustrous
dull
dark
light
Design
Windows
Enclosed
Hypothesis
Materials
Procedure
Summary
Materials that Work Best in Heat Efficiency
Designs that Work Best in Heat Efficiency
References













Introduction

Heat efficiency in any architectural design is always a topic that must be addressed. Without this key element, structures would be totally inefficient to heat, not to mention extremely expensive. In order to design a heat-efficient building, you must first understand where heat is lost or where cold air enters the structure in question. My research will first determine which materials are good for insulation and which are not. Second, I will try to find where heat is most likely to escape in a structure by researching efficient designs. This, in turn, will indicate where it is necessary to add more insulation to a particular structure.


Background

It has been proven time and time again that solar energy plays a crucial part in the heating of any structure, regardless of its design. The intensity of solar energy is almost an exact constant, varying only about 0.2% every 30 years. This intensity is on average about 1.37 × 10^6 ergs per second per cm^2, or about 2 calories per minute per cm^2. The intensity can of course vary when the solar photons interact with different conditions in the atmosphere. This energy from the sun can be converted to heat a structure in many different ways. During my experiment, though, I will only be testing a structure's heating as related to passive solar energy, as illustrated in figure 1. Passive solar energy is where the sun's heat is able to heat a structure without the use of specialized equipment such as a photovoltaic cell or other direct solar energy device.
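
As a quick check that those two figures agree (a sketch using the standard conversions 1 erg = 1e-7 J and 1 cal = 4.184 J):

# Convert the solar intensity from ergs per second per cm^2 to
# calories per minute per cm^2.
intensity_erg_s = 1.37e6
intensity_J_s = intensity_erg_s * 1e-7          # -> 0.137 W/cm^2
intensity_cal_min = intensity_J_s / 4.184 * 60  # cal per minute per cm^2
print(f"{intensity_cal_min:.2f} cal/min/cm^2")  # -> 1.96, close to the quoted 2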
Many organizations in countries such as Australia and England are conducting nationwide heat energy efficiency ratings that can be used as references for engineers and architects. These ratings can inform a designer as to which designs work better and which do not. The program in Australia is titled the "Nationwide House Energy Rating Scheme" (NatHERS) and became available to all designers who wished to use it early in 1995. A parallel program to NatHERS is New Zealand's "Window Energy Rating Scheme" (WERS), which allows homeowners to make better decisions about the selection and design of window systems from an energy perspective. The WERS rating system will not be available, however, until late 1996. Great Britain and many other nations have just recently begun developing their own energy efficiency rating systems, which will not become available to the public until the early 21st century. So far, though, each of the research organizations has been making discoveries that have already begun to affect the architectural design of structures.
Often a structure's ability to collect heat is directly associated with the materials it was built with. Depending on the material itself, it can either hinder or help the structure's ability to collect heat. A lustrous material such as a mirror, for example, would reflect light but retain the heated photons. This effect would heat the structure extremely well because of the lustrous surface's ability to attract light and collect its heat. A dull material like natural wood has proven not to attract much light nor to collect a substantial amount of heat; plain wood would not be a wise choice if the building material were to be how the structure was heated. Most often, structures are built with internal systems that produce heat. The color of the material used to build a structure is also a key element: for the most part, the darker the color, the more heat it attracts and the more heat it can store. A structure that is entirely black will be far easier to heat than one of any other color. Exactly opposite to dark colors are light colors, which do not attract much heat at all and are not efficient at storing it. The best combination is most often a dark, lustrous material if heat is the desired effect; a completely wrong choice would be a material that is dull and light in color, unless cooling is the purpose.
The design of a structure can contain an infinite number of different elements, each either helping to make the structure efficient or hindering its ability. In my experimentation I am only going to focus on two design elements: whether the structure has windows or is enclosed. The false-color image in figure 2 shows heat emanating from a house in the form of infrared radiation. The black regions radiate away the least heat, while the white regions, which coincide with the house's windows, radiate away the most (NASA, 1991). Because solar energy cannot be collected by any structure during the night, all of my experimentation will be conducted around noon in order to create a constant. Figure 2 shows heat escaping mainly through the windows, but it does not show that during the day windows are the most significant passive heat intakes for a structure. Windows are, however, a disadvantage if the structure is placed in a highly shaded area such as a forest; in that case heat would have to be collected in some other way. A greenhouse is probably one of the best examples of passive heating: without the aid of any other device, greenhouses are able to maintain a high temperature. As stated above, windows are also responsible for most heat loss in a directly heated home; in fact, windows account for 41% of heat loss in a typical US home. Double-pane windows are one way to decrease heat loss from a structure, but they do not solve the problem entirely. If the structure is to be built in an area where lack of sunlight is not a problem, then a design with many windows should not pose a heat-loss problem. An enclosed structure with no windows should, in theory, cut out at least the 41% of heat loss attributed to windows, but it would also cut out the large amount of heat gained during the day by passive energy through windows, causing heating efficiency to decrease. Even so, the gain forfeited would probably be less than the 41% loss avoided, making the enclosed design the better choice. Many designers do not choose to do this, however, because of the lack of a view that having no windows would cause; windows also serve as decoration in many designs. Research has shown that the best compromise is to have double-pane windows evenly placed throughout the structure in order to prevent one particular area from becoming too cold or too hot. Insulation in the walls, roof and floor is also a compromise: too little insulation allows an excessive amount of heat to escape from a structure, while too much allows almost no heat to enter. Most structures are directly heated from the inside, allowing more insulation to be applicable.
A combination of the right materials and the correct design for where the structure is to be placed is crucial. If a structure built in a cold, cloudy climate were made purely of windows and white wood, the temperature inside the structure would be close to the temperature outside. Structural design must take many different factors into account; only the three factors of luster, color and windows will be used in my experiment.



























Hypothesis & Experiment

If different materials are used to build a scaled structure, then the structure with high luster will have a higher temperature than the structure with low luster.
If different materials are used to build a scaled structure, then the structure with a darker color will have a higher temperature than the structure with a light color.
If different materials are used to build a scaled structure, then the structure with an enclosed design (no transparent areas) will have a higher temperature than the structure with a transparent design.


MATERIALS:
1 sheet of standard sheet metal
1 sheet of brown box cardboard
2 sheets of black plexi-glass
1 sheet of white plexi-glass
1 sheet of clear plexi-glass
2 thermometers
1 stopwatch or alarm clock
1 jigsaw
1 roll of duct tape


PROCEDURE:
1. Using the jigsaw, cut the sheet metal into five 5" × 5" squares. Then do the same with the cardboard.
2. Using the duct tape, secure the 5 squares of sheet metal together, forming a cube with 1 side missing. Then do the same for the cardboard.
3. Place the 2 semi-cubes outside at about 11:00 a.m. with the missing side facing down. Place the thermometers inside the semi-cubes and use the stopwatch or alarm clock to time 2 hours.
4. At about 1:00 p.m. check the thermometers and record the two temperatures in a data table.
5. Using the 1st black sheet and the white sheet of plexi-glass, follow steps 1-4 the next day.
6. Using the 2nd black sheet and the clear sheet of plexi-glass, follow steps 1-4 the 3rd day.






SUMMARY

The experiment in this paper will probably support my hypotheses, based on the research collected. The NatHERS rating organization reports that lustrous materials are much more likely to collect and store heat than dull materials. They also report that darker colored materials will more often than not collect heat at a higher rate than that of a color such as white. In addition to these findings, SOLARCH (National Solar Architecture Research Unit) has studied window advantages alongside WERS, supporting my own theory that enclosed structures store more heat than transparent structures. Further studies on my part could branch into the other areas of structural design and placement, providing a more detailed plan for experimentation.


REFERENCES

"The Integration of Window Labeling in the Nationwide House Energy Rating Scheme (NatHERS) for Australia".
John Ballinger, Deborah Cassell, Deo Prasad, Peter Lyons,.
SOLARCH- National Solar Architecture Research Unit
The University of South Wales
Sydney 2052 Australia
Internet

Behrman, Daniel. "Solar Energy: the awaking science",
Little, 1980

Butti, Ken and Perlin, John. A Golden Thread
Van Nostrard, 1980. "2500 Years of Solar Architecture and Technology"

Decomposition

Decomposition
12/09/96

Purpose:
In this lab we will observe the products of decomposition of potassium perchlorate (KClO4). We will then predict from our results the correct chemical reaction equation.

Procedure:
1. Weigh out about 4.0g of KClO4 in a test tube. Record the accurate weight below.
Item                         Weight Before    Weight After
Mass of test tube + KClO4    41.5 g           39.8 g
Mass of test tube            37.5 g           37.5 g
Mass of KClO4                4.0 g            2.3 g
2. Set up the apparatus shown below.

3. Gently heat the test tube containing the potassium perchlorate. Gas should begin to collect in the collection bottle. Record all observations.
4. Once the reaction is complete and no more gas is given off, allow the test tube to cool. While the test tube is cooling, test the gas in the collection bottle with a glowing splint.
Caution: Do not leave the rubber tubing down in the water trough during cooling or you will experience back-up.
5. After the test tube has cooled, weigh it on a balance. What is the change in mass?

Observations:
Oxygen flowed from the test tube into the bottle of water, forcing the water out.
The glowing splint re-ignited when placed into the bottle of O2.

Calculations:
1. The number of moles of KClO4 that we began with is 0.03 moles: 4.0 g ÷ 138.6 g/mol = 0.03 moles.
2. The number of moles of O2 present in our sample of KClO4 was 0.06 moles: 0.03 moles KClO4 × 2 moles O2 per mole KClO4 = 0.06 moles, which is 0.06 moles × 32 g/mol = 1.9 g of O2 expected.
3. The number of moles of O2 lost is 0.05 moles: 1.7 g ÷ 32 g/mol = 0.05 moles.
4. KClO4 → KCl + 2O2
5. Percent yield: 89% (O2 lost, 1.7 g, ÷ O2 expected, 1.9 g).
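
A minimal Python sketch of the calculations above (rounding moles to two decimal places, as the report does):

# Decomposition stoichiometry check: KClO4 -> KCl + 2 O2
M_KCLO4 = 138.6                      # g/mol
M_O2 = 32.0                          # g/mol

mass_start = 4.0                     # g of KClO4 weighed out
mass_lost = 4.0 - 2.3                # g lost on heating (the O2 driven off)

moles_kclo4 = round(mass_start / M_KCLO4, 2)   # report rounds to 0.03 mol
o2_expected = moles_kclo4 * 2 * M_O2           # -> 1.92 g, about 1.9 g
percent_yield = mass_lost / o2_expected * 100
print(f"expected {o2_expected:.1f} g O2, percent yield {percent_yield:.0f}%")
# Without the early rounding, the expected O2 is about 1.85 g and the
# yield about 92%; the report's 89% follows from rounding to 0.03 mol.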

CostChem Analysis

Performance:
- Loading: Manual
- Precision: None
- Speed: Dependent upon user

Environment:
- Temperature range: Appropriate user working environment
- Pressure range: Appropriate user working environment
- Humidity: Appropriate user working environment
- Shock loading: High resistance due to cast-iron material
- Dirt: The screw and handle must be properly lubricated
- Corrosion: Rust
- Noise level: Dependent upon user
- Insects: Not applicable
- Vibration: Dependent upon usage (cutting, drilling, pounding)
- Person type: Average individual

Service Life:
Service life of the bench vise should be quite long, due to the durable product at hand. Product life-span should be a minimum of 5 years with a noticeable turnover rate afterwards.

Maintenance:
The product is maintenance free; the only recourse for broken parts is the purchase of a new item or the return of defective parts via warranty. This provides a more economical solution for the end user in comparison to the purchase of a new product. This type of resolution is not easily completed and must be initiated by the end user. Profitability on this basis would be at a bare minimum, with only the customer's satisfaction in mind.

Target Costs:
The original cost of the bench vise is $14.99 retail against an $8.00 manufacturing cost. Initial startup costs would include machines for producing the cast parts, grinding and polishing the anvil, and painting the parts.

Competition:
The competition has comparable products available initially, but after our redesign process our product will be of higher quality. NOTE: We will provide detailed reports on competing products in a future presentation.

Shipping:
In bulk, by land and sea, directly to the company's warehouse in enormous shipping quantities. Distribution will be handled through an exterior company; this will allow us not only to focus solely on our product, but also (most likely) to reduce our overall overhead, since we manufacture only one product.

Product Volume (Quantity):
Projected annual sales: 10,000,000 products (worldwide dist.)
Method of construction: Casting, then minimal manual assembly.
Retooling efforts would be minimal, requiring only a change of dies for each part within the product.

Packing:
Placement into a sturdy box capable of handling the weight of the product. Because of the product's own durability, packaging can be kept at this minimal level.

Manufacturing Facility:
We are a startup company with very minimal, if any, current production levels. We will need to build a complete manufacturing line for our new product.




Size:
Various sizes of the bench vise will be available to suit varying situations and needs (design constraints, work environment, etc.). Due to the work-environment nature of the product, portability will be held to a minimum.

Weight:
The weight factor will be tightly integrated into the design constraints related to clamping and external forces. Shipping costs will, on the whole, not be greatly affected, due to the enormous quantities shipped per delivery.

Aesthetics and Finish:
The current product has a rather "square" appearance; we would like to evolve the existing competitor product so that it has more rounded features, giving it a "new age" appearance.

Materials:
None.

Product Life Span:
Product lifespan should be a minimum of 5 years with a noticeable turnover rate afterwards.

Standards, Specifications, and Legal Aspects:
We will tailor current acceptable design standards to meet our product specifications. Current legal ramifications of design standards and of our additional ones should not result in high liability suits.

Ergonomics:
The ergonomic factors of this product will be primarily handled by the end user (workplace attributes of noise level, open area, reachability, etc.). We will take into account the factors of applied force and leverage to arrive at an acceptable clamping force.

Customer:
The customer may have several existing ties to certain product lines (Sears Craftsman products, etc.). We will have to overcome this disadvantage by attaining a higher level of quality than our competitors.

Quality & Reliability:
Due to the quick production scheme required, a larger number of defective products will be produced, but as long as we keep tolerance levels low and continue with SPC (statistical process control), production levels should remain quite high.

Shelf Life:
For our sake, unlimited (Note: item is corrosion resistant).

Processes:
None.

Timescales:
An acceptable design startup --> launch timeline would be approximately 6 months. This would include assembly line setup and implementation of the new design constraints. One note: the initial time spent on design and setup should be taken very seriously, as it will lead to savings of time and money in the future by avoiding any large redesign of the process or parts. SPC should be taken into account at this stage.

Testing:
Testing should be completed by the end user. This will give us the widest range of environmental settings and a variety of customer complaints and compliments. The testing of the product will allow us to continue with the above SPC process.

Safety:
1. Properly secured product.
2. Keep fingers away from clamping jaws.
3. High stress / force factors may be involved with use.

Company Constraints:
As a startup project, facilities and personnel are added as needed.

Market Constraints:
Comparative prices / functionality with competitive products.

Patents, Literature, and Product Data:
[ Research needed ]

Political and Social Implications:
None known.

Disposal:
Re-melting of product parts when they are no longer wanted or usable will provide an afterlife.

Copper

Blake Adams Period 3
Grade 8 2/5/97




Copper Report




Copper is a mineral; it is not a plant or an animal. Copper is a metallic element: it can never be broken down into different substances by normal chemical means. Copper was one of the first metals known to humans. People liked it because in its native condition it could easily be beaten into weapons or tools. Copper has been one of the most useful metals for over 5000 years. Copper was probably first used around 8000 B.C. by people living along the Tigris and Euphrates rivers. In 6000 B.C., Egyptians learned how to hammer copper into things they wanted. Around 3500 B.C., people first learned how to melt copper with tin to make bronze, and so the period between 3000 B.C. and 1100 B.C. became known as the Bronze Age.
Today, some of the leading states of the copper industry are Arizona with 747,000 short tons, Utah with 187,000 short tons, and New Mexico with 161,000 short tons. The leading countries are Chile with 1,422,000 short tons, the United States with 1,203,000 short tons, the Soviet Union with 650,000 short tons, and Zambia with 596,000 short tons.
When copper is being mined, both native copper and copper ore are usually found. The highest grade of copper ore is pale silvery gray. Miners used to be in constant danger in copper mines; today, we have reduced a fair amount of these hazards. Miners wear hats made of iron or very hard plastic to protect them from falling rocks. Lamps are also attached to these helmets in case some of the lighting in the mine goes out, leaving a miner stranded in the dark. One of the biggest problems with mining is that in some places dangerous gases may exist, like carbon monoxide. In the past we had very cruel and inhumane ways to detect harmful gases. One of these was the use of canaries: miners would let them fly into a part of the mine where a poison gas was suspected, and if there was a harmful gas, the bird would fall over dead at the first scent of it. Today, we have better ways to detect gases without having animals die; we now have detection machines in all parts of mines. Mines also have top-of-the-line fire alarms and water systems. If a flammable gas ignites, like sulfur, the fire may not die for years, which can result in closing the mine. Another problem miners complain about is the rats. Mines will often have mine cats that hunt out the rats; these cats are well fed and petted by most of the miners.
Most copper is found in seven ores, meaning it is mixed in with other metals like lead, zinc, gold, cobalt, bismuth, platinum, and nickel. These ores will usually contain only about 4% pure copper; sometimes miners may find only 2%. The things that make copper such a popular metal are its malleability and ductility. Malleability is how easily it bends: copper is highly malleable and won't crack when hammered or stamped. Ductility is the ability to be molded or drawn into a shape: copper can be pulled into very thin wire. For example, if you took a copper bar 4 inches (10 centimeters) square, you could draw it into wire thinner than a human hair. One of the most amazing things about copper is its resistance to corrosion. Copper will not rust; however, when the air grows damp, copper will go from reddish orange to reddish brown, and after long periods in damp air a green film called patina will coat the copper and protect it from further corrosion.
Since copper is one of the most widely used metals in the world, we use it for a lot of things. Copper gives us water heaters, boilers and cooking utensils. It is used for outdoor power lines, cables, lamp cords, and house wiring. Electrical machinery like generators, motors, controllers, signaling devices, electromagnets, and communication devices all use copper.


Radioactive wastes must, for the protection of mankind, be stored or disposed of in such a manner that isolation from the biosphere is assured until they have decayed to innocuous levels. If this is not done, the world could face severe physical problems affecting the living species on this planet.
Some atoms can disintegrate spontaneously; as they do, they emit ionizing radiation. Atoms having this property are called radioactive. By far the greatest number of uses for radioactivity in Canada relate not to fission, but to the decay of radioactive materials - radioisotopes. These are unstable atoms that emit energy for a period of time that varies with the isotope. During this active period, while the atoms are 'decaying' to a stable state, their energies can be used according to the kind of energy they emit.
Since the mid-1900s radioactive wastes have been stored in different manners, but in recent years new ways of disposing of and storing these wastes have been developed so that they may no longer be harmful. A very advantageous way of storing radioactive wastes is by a process called 'vitrification'.
Vitrification is a semi-continuous process that enables the following operations to be carried out with the same equipment: evaporation of the waste solution mixed with the additives necessary for the production of borosilicate glass(1), and calcination and elaboration of the glass. These operations are carried out in a metallic pot that is heated in an induction furnace. The vitrification of one load of wastes comprises the following stages. The first step is feeding: the vitrification pot receives a constant flow of the mixture of wastes and additives until it is 80% full of calcine. The feeding rate and heating power are adjusted so that an aqueous phase of several litres is permanently maintained at the surface of the pot. The second step is calcination and glass evaporation: when the pot is practically full of calcine, the temperature is progressively increased to 1100 to 1500 °C and then maintained for several hours to allow the glass to elaborate. The third step is glass casting: the glass is cast in a special container. The heating of the output of the vitrification pot causes the glass plug to melt, allowing the glass to flow into containers which are then transferred into storage. Although part of the waste is transformed into a solid product, there is still treatment of gaseous and liquid wastes. The gases that escape from the pot during feeding and calcination are collected and sent to ruthenium filters, condensers and scrubbing columns. The ruthenium filters consist of a bed of glass pellets coated with ferrous oxide and maintained at a temperature of 500 °C. In the treatment of liquid wastes, the condensates(2) collected contain about 15% ruthenium. This is then concentrated in an evaporator, where nitric acid is destroyed by formaldehyde so as to maintain low acidity. The concentrate is then neutralized and enters the vitrification pot.

1) borosilicate: any of several salts derived from both boric acid and silicic acid and found in certain minerals such as tourmaline.
2) condensate: product of condensation.
Once the vitrification process is finished, the containers are stored in a storage pit. This pit has been designed so that the number of containers that may be stored is equivalent to nine years of production. Powerful ventilators provide air circulation to cool down the glass.
The glass produced has the advantage of being stored as a solid rather than a liquid. The advantages of the solid are almost complete insolubility, chemical inertness, absence of volatile products and good radiation resistance. The ruthenium that escapes is absorbed by a filter; the amount of ruthenium likely to be released into the environment is minimal.
Another method being used today to dispose of radioactive waste is the placement and self-processing of radioactive wastes in deep underground cavities: toxic wastes are disposed of by incorporating them into molten silicate rock with low permeability. By this method, liquid wastes are injected into a deep underground cavity with mineral treatment and allowed to self-boil. The resulting steam is processed at ground level and recycled in a closed system. When waste addition is terminated, the chimney is allowed to boil dry. The heat generated by the radioactive wastes then melts the surrounding rock, dissolving the wastes. When waste and water addition stop, the cavity temperature rises to the melting point of the rock. As the molten rock mass increases in size, so does its surface area; this results in a higher rate of conductive heat loss to the surrounding rock. Concurrently, the heat production rate of the radioactivity diminishes because of decay. When the rate of heat loss exceeds that of input, the molten rock will begin to cool and solidify. Finally the rock refreezes, trapping the radioactivity in an insoluble rock matrix deep underground. The heat surrounding the radioactivity would prevent the intrusion of ground water. Once the steam and vapour are no longer released, the outlet hole is sealed.
To go a little deeper into this concept, the treatment of the wastes before injection is very important. To avoid breakdown of the rock that constitutes the formation, the acidity of the wastes has to be reduced. It has been established experimentally that pH values of 6.5 to 9.5 are best for all receiving formations; within such a pH range, breakdown of the formation rock and dissociation of the formation water are avoided. The stability of wastes containing metal cations which become hydrolysed in acid can be guaranteed only by complexing agents, which form 'water-soluble complexes' with the cations in the relevant pH range. The importance of complexing in the preparation of wastes increases because raising the waste solution pH to neutrality or slight alkalinity results in increased sorption by the formation rock of radioisotopes present in the form of free cations. The incorporation of such cations causes a pronounced change in their distribution between the liquid and solid phases and weakens the bonds between isotopes and formation rock. Preparation of the formation is equally important. To reduce the possibility of chemical interaction between the waste and the formation, the formation is first flushed with acid solutions. This operation removes the principal minerals likely to become involved in exchange reactions, as well as the soluble rock particles, thereby creating a porous zone capable of accommodating the waste. The required acidity of the flushing solution is established experimentally, while the required amount of radial dispersion is determined using the formula:
R = Qt / (2mn)

where:
R is the waste dispersion radius (metres)
Q is the flow rate (m/day)
t is the solution pumping time (days)
m is the effective thickness of the formation (metres)
n is the effective porosity of the formation (%)
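
As printed, the formula can be evaluated directly. The values below are purely illustrative assumptions of mine, not data from the source:

# Radial dispersion radius, R = Q*t / (2*m*n), as printed above.
# Illustrative inputs only; none of these values come from the source.
Q = 50.0   # flow rate
t = 30.0   # pumping time, days
m = 10.0   # effective formation thickness, metres
n = 0.15   # effective porosity, as a fraction (15%)

R = Q * t / (2 * m * n)
print(f"dispersion radius R = {R:.0f} metres")  # -> 500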

In this concept, storage and processing are minimized. There is no surface storage of wastes required, and the permanent binding of the radioactive wastes in a rock matrix gives assurance of their permanent elimination from the environment. This is a method of disposal safe from the effects of earthquakes, floods or sabotage.
With the development of new ion exchangers and the advances made in ion technology, the field of application of these materials in waste treatment continues to grow. Decontamination factors achieved in ion exchange treatment of waste solutions vary with the type and composition of the waste stream, the radionuclides in the solution and the type of exchanger.
Waste solution to be processed by ion exchange should have a low suspended solids concentration, less than 4 ppm, since this material will interfere with the process by coating the exchanger surface. Generally the waste solutions should contain less than 2500 mg/L total solids; most of the dissolved solids would be ionized and would compete with the radionuclides for the exchange sites. Where the waste can meet these specifications, two principal techniques are used: batch operation and column operation.
The batch operation consists of placing a given quantity of waste solution and a predetermined amount of exchanger in a vessel, mixing them well and permitting them to stay in contact until equilibrium is reached. The solution is then filtered. The extent of the exchange is limited by the selectivity of the resin; therefore, unless the selectivity for the radioactive ion is very favourable, the efficiency of removal will be low.
Column operation is essentially a large number of batch operations in series, and is usually more practical. In many waste solutions the radioactive ions are cations, and a single column or a series of columns of cation exchanger will provide decontamination. High-capacity organic resins are often used because of their good flow rate and rapid rate of exchange.
Monobed or mixed bed columns contain cation and anion exchangers in the same vessel. Synthetic organic resins of the strong acid and strong base type are usually used. During operation of mixed bed columns, the cation and anion exchangers are mixed to ensure that the acid formed after contact with the H-form cation resin is immediately neutralized by the OH-form anion resin. Monobed or mixed bed systems are normally more economical for processing waste solutions.
Against a background of growing concern over the exposure of the population, or any portion of it, to any level of radiation, however small, the methods which have been successfully used in the past to dispose of radioactive wastes must be reexamined. There are two commonly used methods: the storage of highly active liquid wastes and the disposal of low-activity liquid wastes to a natural environment such as the sea, a river or the ground. In the case of the storage of highly active wastes, no absolute guarantee can ever be given, because a vessel deterioration or catastrophe could cause a release of radioactivity. The only alternative to dilution and dispersion is concentration and storage, and this is implied for the low-activity wastes disposed into the environment. The alternative may be to evaporate off the bulk of the waste to obtain a small concentrated volume; the aim is to develop more efficient types of evaporators. At the same time, the decontamination factors obtained in evaporation must be high to ensure that the activity of the condensate is negligible, though there remains the problem of accidental dispersion. Much effort is currently under way in many countries on the establishment of ultimate disposal methods. These are defined as those which fix the fission product activity in a non-leakable solid state, so that general dispersion can never occur. The most promising approaches in the near future are the absorption of wastes on montmorillonite clay (natural clays that have a good capacity for chemical exchange of cations and can store radioactive wastes), fused salt calcination (which will neutralize the wastes) and high-temperature processing. Even though man has made many breakthroughs in the processing, storage and disintegration of radioactive wastes, there is still much work ahead to render the wastes absolutely harmless.