Background: Nursing education is a dynamic process designed to enable nurses to competently meet the healthcare needs of society. Health system restructuring has been associated with diminishing numbers of postgraduate specialist nurses worldwide. Valid instruments that monitor and evaluate nurses' attitudes, in order to gauge educational barriers and facilitators, are a central component of planning effective education, yet none have been available. In phase one of this study we developed the Registered Nurses Attitudes Towards Post Graduate Education (NATPGE) survey instrument to fill this gap.

Objective: To test the validity and reliability of the NATPGE survey instrument.

Methods:
I. Content Validity Index (CVI)1: the CVI, based on expert ratings of relevance, was used to quantify the content validity of the NATPGE survey instrument. Four categories (relevance, simplicity, clarity and non-ambiguity) enabled appraisal of the tool's phrasing and terminology, and a section was included for recommendations on content modification. The CVI was used to assess:
1. NATPGE content validity: four content experts (CE), specialising in specialist-nurse education, psychometric scales, and the development and analysis of instruments, were selected to undertake judgment-quantification and agree on the final version of the NATPGE survey instrument prior to testing its face validity.
2. NATPGE face validity: a convenience sample of 25 Registered Nurses (RNs) was selected from four major Queensland tertiary hospitals to assess the readability and relevance of the instrument's content.
II. Test-retest reliability: a random sample of 100 RNs from the Nurses and Midwives e-Cohort Study (NMeS)2 was invited to participate in a test-retest pilot as part of assessing the reliability of the online NATPGE. To gauge test-retest reliability, the instrument was administered at two time points, 3 weeks apart, under similar conditions.
III. Demographic variables were collected for all participants.

Analysis: Content and face validity were assessed using descriptive statistics. For test-retest reliability, the 15 NATPGE questions were analysed item by item to calculate intra-rater reliability using the weighted kappa (kw) statistic and its standard error (SE). Unlike the unweighted kappa, which treats all disagreements as equally serious, the kw weights disagreements according to the magnitude of the discrepancy between the two ratings. The reference values for strength of agreement follow Altman3 (0.0-0.20 poor, 0.21-0.40 fair, 0.41-0.60 moderate, 0.61-0.80 good and 0.81-1.00 very good agreement). Data were analysed using Stata 12 (StataCorp LP, College Station, TX, 2011).

Results:
Demographics of participants: the four CE and the RNs were similar in terms of years of experience, postgraduate qualifications and work in a specialty area.
Content and face validity: overall, using the CVI, both the CE and the RNs rated the NATPGE as a realistic instrument that would be useful for evaluating RNs' attitudes towards postgraduate education (Table 2). Comments received from the CE resulted in minor changes to the wording of some items to improve clarity and simplicity. No particular concerns were raised about any of the items by the CE. The CE agreed with the arrangement of items in an alternating positively and negatively worded sequence, which was intentional in order to prevent response bias.
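For reference, a common formulation of the weighted kappa used in the item-by-item analysis below is the following (a sketch only; linear weights are shown as an illustrative assumption, since the weighting scheme is not specified in this abstract):

\kappa_w = 1 - \frac{\sum_{i,j} w_{ij}\, p^{obs}_{ij}}{\sum_{i,j} w_{ij}\, p^{exp}_{ij}}, \qquad w_{ij} = \frac{|i-j|}{k-1}

where p^{obs}_{ij} and p^{exp}_{ij} are the observed and chance-expected proportions of test-retest response pairs (i, j), w_{ij} are the disagreement weights and k is the number of response categories; kw = 1 indicates perfect agreement and kw = 0 indicates agreement no better than chance.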
Test-retest reliability: complete data were available and analysed for 36 of the 100 sampled RNs (36%) who completed the test and retest of the NATPGE instrument. Overall, the results show fair to moderate agreement for 80% of items (kw = 0.29-0.57); however, there was some variability in the test-retest kw across individual questions (kw = 0.0 to 0.79).

Conclusion: The present research indicates very good content and face validity. Whilst overall test-retest reliability was moderate, several individual questions had poor kappa values. We therefore plan to refine the instrument before validating it in a larger sample using factor analysis; this work is currently being undertaken.
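As an illustrative sketch only, the item-by-item test-retest weighted kappa described above could be computed as follows (the study analysis was performed in Stata 12; this Python example uses hypothetical responses on an assumed 5-point Likert scale and linear weights, none of which are taken from the study data):

    import numpy as np
    from sklearn.metrics import cohen_kappa_score

    # Hypothetical test and retest responses for one NATPGE item,
    # scored on an assumed 5-point Likert scale (1-5); not study data.
    test = np.array([4, 5, 3, 2, 4, 5, 1, 3, 4, 2])
    retest = np.array([4, 4, 3, 2, 5, 5, 2, 3, 4, 2])

    # Linearly weighted kappa: disagreements are penalised in proportion
    # to the distance between the two ratings.
    kw = cohen_kappa_score(test, retest, labels=[1, 2, 3, 4, 5], weights="linear")
    print(f"weighted kappa = {kw:.2f}")

The standard error (SE) reported in the study is not produced by this call and would need to be obtained separately, for example by bootstrapping the paired responses.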