
j.innes

Members · Content count: 24 · Days Won: 14

Everything posted by j.innes

  1. Hi all, I have revived an old thread in the NVivo 9/10 forum about query timeouts. I have gratefully received a response from the QSR helpdesk about this topic, so I am sharing it here in case anyone else finds it useful:

    Firstly, please follow the steps below to perform a Compact and Repair of your project:
    1. If your project is located on a network drive, copy it to your local machine.
    2. If the project is currently open, close the project via File > Close.
    3. Click the File tab > Help > Compact and Repair Project.
    4. At the Select Project dialog box, locate and select the project - select NVivo Projects (*.nvp) to repair a standalone project.
    5. Click Open.

    If the issue persists, you may need to change the SQL timeout period manually by editing your configuration file. This may be a little technical; you can ask your IT support personnel to help you if you are unable to do it yourself. The steps are below:
    1. Ensure NVivo 11 is closed.
    2. Open Windows Explorer (right-click on the Windows Start button -> Explore).
    3. Display all hidden files and folders:
       a) Click on Organize -> Folder and Search Options.
       b) On the 'Folder Options' dialog, click on the View tab and check the 'Show hidden files and folders' option.
       c) Click the Apply button, then click OK.
    4. Navigate to the following folder: C:\Users\[username]\AppData\Roaming\QSR_International [1]
    5. You should see a few folders whose names start with NVivo.exe. Go into the first one and see if it has a subfolder named "11.1.0.411" [2]. If not, go back one level and check the second NVivo.exe folder; one of the two will definitely have an "11.1.0.411" subfolder. Go into that subfolder.
    6. You should see a file called user.config. Open it with Notepad or WordPad (Start -> All Programs -> Accessories -> WordPad, then File -> Open, making sure "All Documents (*.*)" is selected in the 'Files of Type' dropdown).
    7. The 19th line from the top [3] (or one or two lines above or below) should mention the SqlCommandTimeout setting, and the next line should have a value of 60. Change this value to a higher number like 120 or 180.
    8. Open NVivo 11 and re-run your query. If you still get the same error, try increasing the timeout value further.

    Please note that changing the SqlCommandTimeout value may cause the Cancel button (when you run a query) to become unresponsive.

    Lastly, if you are using 32-bit NVivo 11, it will also help to switch to the 64-bit version. Please refer to the link below to find out whether you can install 64-bit: http://www.qsrinternational.com/support_faqs_detail.aspx?view=1189

    My own notes on the helpdesk's steps:
    [1] Rather than going through all those steps, I found it easier to type %appdata% into Windows Explorer and choose the right version of Nvivo from there.
    [2] This number is the version of Nvivo you're running. Your version may be different, so check this first by going to 'File', 'Help', 'About Nvivo'.
    [3] It was not the 19th line for me. I just used CTRL+F to find the SqlCommandTimeout setting.
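    For anyone who would rather script the change than edit the file by hand, here is a rough sketch in Python. The folder pattern and the XML layout of user.config are assumptions based on the helpdesk's description (on my machine user.config looks like a standard .NET user-settings file), so check your own file and back it up first:

      # Rough sketch: raise SqlCommandTimeout in NVivo's user.config.
      # Assumptions (check against your own machine): the file lives under
      # %APPDATA%\QSR_International\NVivo.exe*\<version>\user.config and contains
      # a <setting name="SqlCommandTimeout"> element with a <value> child.
      import glob
      import os
      import shutil
      import xml.etree.ElementTree as ET

      NEW_TIMEOUT = "180"  # seconds; the helpdesk suggests trying 120 or 180

      pattern = os.path.join(os.environ["APPDATA"], "QSR_International",
                             "NVivo.exe*", "*", "user.config")

      for path in glob.glob(pattern):
          shutil.copy2(path, path + ".bak")  # keep a backup before editing
          tree = ET.parse(path)
          changed = False
          for setting in tree.iter("setting"):
              if setting.get("name") == "SqlCommandTimeout":
                  value = setting.find("value")
                  if value is not None:
                      value.text = NEW_TIMEOUT
                      changed = True
          if changed:
              tree.write(path, encoding="utf-8", xml_declaration=True)
              print("Updated", path)

    Run it with NVivo closed; if the query still times out, bump NEW_TIMEOUT higher, as the helpdesk suggests.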
  2. Word trees

    Hi everyone, I'm trying to spice up my report with some word trees. My project contains 99 documents. When I try to visualise the text search query with a word tree, it gives me all the words that come before/after the searched word. Is there a way of limiting the number of results/branches? For example: I'm searching for the word 'education'; is there a way of displaying only those words that appear at least ten times before the word 'education' and at least ten times after the word 'education'? I've attached a jpeg of a word tree of one document; there is no chance I can put this into a report as it is. Ideally, I'd like to be able to show that the words 'higher' and 'further' do appear before the word 'education', and that the words 'institutions' and 'colleges' appear after the word 'education'. With thanks, Julia
  3. Have you considered running a case/node matrix query? This would present you with a table with each case down the side, showing which nodes have been coded at each case. If you want to run the matrix for only a particular set of cases, you can pick and choose which items you are interested in. More info here: http://help-nv11.qsrinternational.com/desktop/procedures/run_a_matrix_coding_query.htm
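    Just to illustrate the shape of the output (this is a made-up example in Python, not real project data): you end up with one row per case and one column per node, with each cell counting the coding references.

      # Made-up illustration of a case-by-node matrix: rows are cases, columns
      # are nodes, and each cell counts coding references. Not real data.
      import pandas as pd

      matrix = pd.DataFrame(
          {"Education": [3, 0, 5], "Employment": [1, 2, 0], "Housing": [0, 4, 1]},
          index=["Case A", "Case B", "Case C"],
      )
      print(matrix)
      # A row of zeros means that case has no coding at any of these nodes.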
  4. Hi folks, Sadly I think I get to wear the dunce hat for the week. I'm running interviews as part of my project. Because it is on a sensitive topic, I have committed to transferring each interview onto my PC as soon as I can and deleting it off the Dictaphone. It's encrypted, but it's harder to steal a desktop than it is a Dictaphone. I hope you can see where this is going and why this is a problem. I imported today's interviews into Nvivo11: clicked on Data, Audios, and then imported directly from the Dictaphone connected to my PC. Looking through the project log, I imported to internals. When I imported, I double-checked that everything worked - the file played fine and displayed the correct metadata. With glee, I disconnected the Dictaphone and deleted the interviews. Now, six hours later, I have reopened my project only to be met with a 'could not open media file for this source' error message whenever I try to click on any of the interviews in Nvivo. Obviously, the originals have been deleted and dunce here has not backed up the interviews on the PC (why would I? Nvivo compresses other data very neatly!). The interview metadata still exists in Nvivo - it tells me the length and the format of each file, just no actual data. Is there a chance I can retrieve the lost data? If not, how can it be made more obvious in Nvivo when data is not stored inside the project? And also, let this be a lesson to all - back up your files: Nvivo does not store everything. With thanks, Julia
  5. I think it would have been handy to know that upfront. As Nvivo stores all other files, this is a pretty esoteric piece of knowledge. Anything you can do to make this more obvious for other users?
  6. I have posted a solution to this in the Nvivo11 forum.
  7. Hello, Sorry to revive an old thread. Do you know if this works on Nvivo11 too? I can find the appdata files for Nvivo10, but there's no such line of code in the .config files (I can't find an isolated.config file either). Or is there a more elegant fix in Nvivo11 now? With thanks, Julia
  8. I'm trying to automate adding classifications in my dataset. I have gone through the steps outlined in the link below, in the 'Classify nodes from values in a dataset' section: http://help-nv10.qsrinternational.com/desktop/procedures/classify_nodes_(set_attribute_values_to_record_information).htm When I go to 'Classify Nodes from Dataset' I get the attached error message, 'Before classifying, you need to add at least one classification to your project'. My dataset does have classifying fields uploaded, and I have autocoded each row for each respondent. What else am I missing? With thanks, Julia
  9. That was indeed the missing step. Thanks marcioandrei!
  10. Hi folks, I'm trying to do a fairly simple text search within a body of documents. There are a total of 38 documents, with about 920k words between them all. If I try to do a query for the word 'further' I receive no results. However, if I do CTRL+F within a document I find the word 'further'. I'm trying to find out how often the phrase 'further education' pops up in these documents (along with a few others). Any idea of how I can get around this discrepancy, or why the query tool is not picking up this word? Many thanks in advance! Julia
  11. Incidentally, I've tried copy and pasting the word 'further' from the original text into the query function. Sadly no joy.
  12. At the very least it would be helpful to have an answer to say that this is not possible. Incidentally, I am now going for plan B. I have exported the list of my documents and nodes as a matrix. I will merge this dataset with the classification spreadsheet in SAS and take it from there (rough sketch of the merge below). I would have thought this is something that Nvivo should be able to do for you, providing context to your data, but it looks like this is stuff you need to do manually. Quite brutal for 650 articles!
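    In case it helps anyone attempting the same workaround, this is roughly the merge I have in mind, sketched in Python/pandas rather than SAS. The file names and column names (the 'Document' key, 'Journal', the node columns) are just placeholders for whatever your own exports contain:

      # Rough sketch of merging an exported document-by-node matrix with a
      # classification spreadsheet. File and column names are placeholders.
      import pandas as pd

      # Matrix export: one row per document, one column per node.
      matrix = pd.read_excel("node_matrix_export.xlsx")

      # Classification sheet: one row per document with its attributes.
      classes = pd.read_excel("classification_sheet.xlsx")

      # Join on the document name; keep every document from the matrix even if
      # it has no classification row, so gaps are easy to spot.
      merged = matrix.merge(classes, on="Document", how="left")

      # Example follow-up: total coding at two nodes, broken down by an attribute.
      print(merged.groupby("Journal")[["Node A", "Node B"]].sum())
      merged.to_csv("matrix_with_attributes.csv", index=False)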
  13. Hi Joy, I'm not sure if Nvivo has this functionality. However, if you have a list of postcodes there's no reason why you can't use Google Fusion Tables.
  14. I'm interested in doing this too! I was operating under the assumption that once I did the coding of data, I could then overlay the attributes on top by importing the classification sheet. I have really struggled with the actual process. The above seems like a good idea, but if there's a neater way then that would be even better. My project involves a set of documents. The information about these documents (such as date published, author, journal, etc.) is saved in a classification spreadsheet. The documents themselves have been coded for themes; now, to spot patterns, I'd like to connect each document to its relevant author and date. I've attempted importing the spreadsheet both as a node and as a source classification sheet. This process does not actually connect the documents with the attributes (or at least not the way I have done it). The above suggestion seems sensible, but how do I make Nvivo recognise that the title of the document in Nvivo is the same as the title of the document in the classification sheet?
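    For what it's worth, my current plan is to tidy the 'Name' column of the classification sheet so it matches the source names in NVivo exactly before importing it. A rough sketch in Python (the file name, the 'Name' column, and the idea that source names drop the file extension are all assumptions about my own set-up):

      # Rough sketch: tidy the names in a classification spreadsheet so they
      # match the source names in NVivo before import. File and column names,
      # and the dropped-extension assumption, are specific to my own files.
      import pandas as pd

      sheet = pd.read_excel("classification_sheet.xlsx")

      def tidy(name):
          name = str(name).strip()               # drop stray spaces
          for ext in (".pdf", ".docx", ".doc"):  # source names seem to lose the extension
              if name.lower().endswith(ext):
                  name = name[: -len(ext)]
          return name

      sheet["Name"] = sheet["Name"].map(tidy)
      sheet.to_excel("classification_sheet_tidied.xlsx", index=False)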
  15. I'm doing a broad internet-based research project. It goes something along the lines of entering a few terms into a search engine and then analysing the content of the search results. I'm quite aghast at how much work is involved in the initial stage of data collection. I have to document the search results and why they would be picked for research or discarded. What I'd like to know is whether there's an elegant way of automating the compilation of the dataset using NVivo (like you can with Twitter). Otherwise, what I've done in the past (with more focused searches) is to use Excel for the initial stage of documenting and sifting through data, then NCapturing the relevant sources (from Excel links) to import into NVivo. This has worked fine for projects of up to 500 articles, but I'm looking at 500,000+ this time. Any pointers gratefully received - even if it's a case of using another software programme to do the documenting and sifting of data, and then using that to open only the relevant links and import only the relevant articles using NCapture. With thanks, Julia
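    P.S. For context, this is the sort of documenting/sifting log I currently keep in Excel, sketched here as a small Python script. It assumes you already have the candidate URLs from whatever search tool you use; nothing here talks to a search engine:

      # Rough sketch of the documenting/sifting step: log each candidate result
      # with a keep/discard decision and a reason, then open only the kept links
      # for NCapture. The URLs and reasons below are placeholders.
      import csv

      candidates = [
          # (url, keep?, reason) - filled in as each result is reviewed
          ("http://example.com/article-1", True,  "relevant to the research question"),
          ("http://example.com/article-2", False, "duplicate of article 1"),
      ]

      with open("search_sift_log.csv", "w", newline="", encoding="utf-8") as f:
          writer = csv.writer(f)
          writer.writerow(["url", "keep", "reason"])
          writer.writerows(candidates)

      # The kept URLs are the ones worth opening and NCapturing into NVivo.
      print("\n".join(url for url, keep, _ in candidates if keep))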
  16. Web search

    D'oh! Thanks for the support, Simon. I'll see if I can fix something up out of other software.
  17. Hello, I'm afraid I'm once again banging my head against my desktop. I'll try to explain as best I can what I'm trying to do, but if anything's unclear please ask below for further details (the answer may be in the details, after all). I have a classification of 'Value', where the goods talked about can have one of the following three attribute values: 'High value', 'Medium value', and 'Low value' (not named 'budget', for consistency). I've already created a node matrix to see if any of the nodes are strongly correlated within the text. So, for example, I have learnt that the node 'staff costs' is strongly correlated with 'cost reduction' overall. I would now like to do this for each attribute value (i.e. by each value of goods) to see if, for example, high value goods have the 'marketing' node and the 'increased competition' node more strongly correlated than low value goods, which may have 'market opportunity' strongly correlated with 'marketing'. Or, even better, I'd like to see whether the 'staff costs' correlation with 'cost reduction' is only present in low value goods, and not in high or medium value goods. I've made the relationships up, but this is the kind of process I'm looking for Nvivo to do. Any hints? Thanks again! Julia
  18. Hi Bhupesh, thank you for your response. I'm afraid this isn't quite what I'm after (unless I'm misunderstanding the terminology). I'd still like to create a node matrix to see how the different coded segments relate to each other. However, I would like to create matrices conditional on each attribute to see if the nodes have different relationships for a defined value of goods.
  19. Hello, I have autocoded my documents via text query. I am now trying to sift through the node to make sure I only include data that's relevant, but I can't seem to uncode some bits of it. I have no idea how to get around this. Is there a solution? I've got an image below to show what I mean.
  20. Ah, I see. Thanks! Could you please explain what a compound query is?
  21. Hiya, I'm having the same issue. I've looked at the suggested link but I'm afraid that's not very useful if you don't know the correct syntax. Could you please provide an example of combining wildcard search with the 'near' operator? Many thanks, Julia