abatty

Members
  • Content count: 14
  • Joined
  • Last visited

Community Reputation: 0 Neutral

About abatty
  • Rank: Member

Profile Information
  • Gender: Male

  1. Another suggestion for you folks. When exporting durations from audio/video coding to Excel, they just come out as text. This means the user then gets to do battle with Excel to get it to realize that they are duration timecodes. It would be great if NVivo did that already with the exports, setting the cells to the format: [h]:mm:ss.0 Dealing with durations in Excel is notoriously headache-inducing; if NVivo could take the first step during export, sales of ibuprofen would plummet.
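Until something like this is built in, the exported workbook can be patched after the fact. Below is a minimal sketch using Python and openpyxl, assuming the durations arrive as text in the form h:mm:ss.s; the file name, sheet, and column letter are placeholders rather than anything NVivo actually produces. Excel stores durations as fractions of a day, so the conversion plus the [h]:mm:ss.0 number format is all the "battle with Excel" really amounts to.

    # Sketch: turn text durations in an exported workbook into real Excel durations.
    # Assumptions: the file/column names are hypothetical, and the exported text
    # looks like "0:01:23.5" (hours:minutes:seconds.tenths).
    from openpyxl import load_workbook

    def duration_to_days(text):
        """Parse 'h:mm:ss.s' text into an Excel serial duration (fraction of a day)."""
        hours, minutes, seconds = text.strip().split(":")
        total_seconds = int(hours) * 3600 + int(minutes) * 60 + float(seconds)
        return total_seconds / 86400.0  # Excel counts times in days

    wb = load_workbook("matrix_export.xlsx")   # hypothetical export file
    ws = wb.active
    DURATION_COL = "D"                         # hypothetical duration column

    for row in range(2, ws.max_row + 1):       # skip the header row
        cell = ws[f"{DURATION_COL}{row}"]
        if isinstance(cell.value, str) and ":" in cell.value:
            cell.value = duration_to_days(cell.value)
            cell.number_format = "[h]:mm:ss.0"  # the duration format mentioned above

    wb.save("matrix_export_durations.xlsx")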
  2. Here is how I ultimately solved this: At the outset, I was having trouble understanding the difference between classifications and nodes, so I doubled up anything I could, which ended up saving me here. I did a Matrix Query with the Participants in the rows. For the columns I did:
      • Task Types, which were node classification attributes
      • The node for Output (it's binary, so even though that information is in two nodes, I only need one, and I'll get a count of 1 or 0)
      • The Behavior nodes
    I then set the sources only to the videos of the task in question. This was facilitated by moving them into their own folders in the source list. I was then able to do an export to Excel for each source set (e.g., all the Task01 videos, then all the Task02 videos...) by running the query over and over. I also exported the durations. Then it was a rather simple matter of sewing them all together in Excel. I do realize I'm torturing NVivo a bit here, and that this isn't quite the kind of analysis it was designed for, but nothing else would have made coding these videos as (relatively) painless as NVivo. The interview data is much easier to work with, because it's much closer to what NVivo is designed for.
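For anyone reproducing this outside NVivo, the "sewing them all together in Excel" step can also be scripted. A rough sketch with pandas, assuming one Matrix Query export per task with participants in the rows; the file names and the added Task column are invented for illustration.

    # Sketch: stitch per-task Matrix Query exports into one long table.
    # The file naming scheme and column labels are hypothetical.
    import pandas as pd

    frames = []
    for task in range(1, 7):                      # six tasks per participant
        path = f"Task{task:02d}_matrix.xlsx"      # hypothetical export file name
        df = pd.read_excel(path, index_col=0)     # participants in the rows
        df.insert(0, "Task", f"Task{task:02d}")   # remember which export each row came from
        frames.append(df)

    combined = pd.concat(frames).rename_axis("Participant").reset_index()
    combined.to_excel("all_tasks_combined.xlsx", index=False)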
  3. That's a pretty neat workaround. I'll give it a shot.
  4. Video Coding UI Suggestions

    I'm always happy to provide more detail... about anything, really. Thanks for reading!
  5. Hello again. I am trying to get a full data table for my respondents for some of the things I've coded them on. I need more detail than I can get from the Matrix Query function (I think), and the Extract function doesn't quite seem to fit.
      • I have my participants as nodes.
      • Each participant did 6 tasks; these are coded as the name of the source video, an attribute on the video, and also a node coding the video (I did it all three ways because I wasn't sure which I'd need).
      • Each task has a binary output (Yes/No); this is available as an attribute on the source, and also as a node.
      • After that, I have a series of Behaviors that I'd like counts and/or durations of coding for.
    Ideally, I'd have an Excel table that looks like this:
    ParticipantName | Task | Output | Behavior01 References | Behavior02 References | Behavior03 References | ...
    --with a line for each task, so each participant would have 6 lines. Is this possible? Thanks! Aaron
  6. Simon: Thanks again for your super-quick response! Aha! I read the matrix query page a few times and never followed this link that explains that: http://help-nv10.qsrinternational.com/desktop/procedures/work_with_the_content_of_a_node_matrix.htm However, when I try either the row or column percentages, everything goes to 0%. I guess what I'd really like is something with a row and a column devoted to "other" for the times when "Happy" was not coded with any of the behaviors, and "Smiling" was not coded with any of the moods. And then I'd like marginal totals/percentages. That would allow me to run some basic stats on these things. Durations would be ideal.
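If the built-in row/column percentages keep coming out as 0%, one fallback is to export the raw counts and compute the percentages and marginal totals outside NVivo. A small sketch with pandas, assuming the matrix was exported with moods as rows and behaviors as columns; file and sheet names are invented. Note that the "other" row/column asked for above cannot be derived from the co-occurrence counts alone; it would also need the total coded coverage of each mood and behavior.

    # Sketch: row/column percentages plus marginal totals from an exported count matrix.
    # The file name and layout (moods as rows, behaviors as columns) are assumptions.
    import pandas as pd

    counts = pd.read_excel("mood_by_behavior_counts.xlsx", index_col=0)

    counts.loc["Total"] = counts.sum()        # marginal totals per behavior
    counts["Total"] = counts.sum(axis=1)      # marginal totals per mood (plus grand total)

    # Row percentages: of everything coded to a given mood, how much co-occurs with each behavior
    row_pct = counts.div(counts["Total"], axis=0) * 100
    # Column percentages: of everything coded to a given behavior, how much co-occurs with each mood
    col_pct = counts.div(counts.loc["Total"], axis=1) * 100

    with pd.ExcelWriter("mood_by_behavior_percentages.xlsx") as writer:
        counts.to_excel(writer, sheet_name="counts")
        row_pct.round(1).to_excel(writer, sheet_name="row_pct")
        col_pct.round(1).to_excel(writer, sheet_name="col_pct")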
  7. I am trying to do something that I'm sure the data is in there to do, but I'm not sure how to get NVivo to do it, or how to get the data out to run the calculations elsewhere. I have a bunch of video that is coded with overlapping codes (e.g., mood of the scene and viewer behavior). I would like to figure out the amount of overlap between viewer behaviors and the mood of the scene. I can do a matrix coding query with mood on one axis and behavior on the other, and that gives me counts, but it doesn't do percentages, which are more informative, since neither the moods nor the behaviors are equally represented. It also does not seem to take the totals into account.
    So, for example, let's say I had "Happy" as the mood of a scene, and "Smiling" as a viewer behavior. I'd like to know the percentage of times in the video sources in question that regions coded "Happy" were also coded "Smiling." I'd also like to know the percentage of times that regions coded "Smiling" were also coded "Happy." It'd be even better if it could report percent coverage, since we're talking about timespans of video here; for example, perhaps 2 minutes of the video were "Happy," but the respondent was only "Smiling" for 15 seconds of that, so there would be 12.5% shared coverage in that instance.
    I am much more interested in numbers than words (for this part of the project). I imagine I could spit out a series of Excel tables and form them into cases that could be analyzed in a stats package, and I'm sure that's where I'm headed (my comfort zone), but I'd like to get NVivo to do as much of the data formatting as possible! I know things like percent coverage are in there, because I am getting what I want with queries between source attributes and coding, but for a lot of the sexier things I'm researching, I need to be going coding vs. coding.
    Hope someone has some good ideas... Thanks much, Aaron
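Once the coded timespans are out of NVivo in some tabular form, the percent-coverage arithmetic itself is straightforward. A sketch in plain Python with invented interval data, chosen to match the 2-minute/15-second example above; it assumes the regions within each code do not overlap one another.

    # Sketch: percent shared coverage between two sets of coded timespans.
    # Intervals are (start_seconds, end_seconds); the values are made up to match
    # the "2 minutes Happy, 15 seconds Smiling" illustration.

    def total_overlap(regions_a, regions_b):
        """Seconds during which a region from A and a region from B co-occur."""
        overlap = 0.0
        for a_start, a_end in regions_a:
            for b_start, b_end in regions_b:
                overlap += max(0.0, min(a_end, b_end) - max(a_start, b_start))
        return overlap

    def coverage(regions):
        return sum(end - start for start, end in regions)

    happy = [(0.0, 120.0)]    # 2 minutes coded "Happy"
    smiling = [(30.0, 45.0)]  # 15 seconds coded "Smiling" within that stretch

    shared = total_overlap(happy, smiling)
    print(f"Share of 'Happy' also coded 'Smiling': {shared / coverage(happy):.1%}")    # 12.5%
    print(f"Share of 'Smiling' also coded 'Happy': {shared / coverage(smiling):.1%}")  # 100.0%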
  8. Alright, I sent it your way. Thanks for taking a look.
  9. Hey folks; one more issue with transcripts. I have imported my Japanese transcripts from plaintext (tab-delimited) files, and they are all in quite nicely, but the default font size seems to be 8pt, which on a high-resolution monitor is tough to read even for Roman characters; for Japanese kanji it is nigh on painful. I can turn on editing and highlight and change the size line by line, but there are hundreds of lines across 12 files. I've tried the zoom button, but that only seems to apply to the timeline. I've also searched the options, hoping that one of the presets would just set all of these to something else, but no luck there, either. Hoping there's a quick/easy way to do all the content fields at once... Thanks! Aaron
  10. Simon: I'd be happy to; where can I send it? Aaron
  11. Hi folks. I'm having a lot of trouble with importing my Japanese transcripts. I'm just getting garbled text.
      • NVivo 10 32-bit Windows
      • NVivo UI English
      • Project text content set to Japanese
      • Timestamped transcript from a Japanese transcription service
      • Tab-delimited text file; encoding is UTF-8, although I've tried switching it to Shift-JIS as well
      • Word (.docx) file doesn't come in at all; reports a parser error
      • Manually typing Japanese into the transcript fields works fine
    I have some classification attributes in Japanese with no problem. I just can't get these (critical and expensive) transcriptions in. I hope someone has some ideas; I'm pretty sunk if I have to listen to all of these and code the audio instead of just reading the transcript...
    UPDATE: I should have mentioned that I was trying to get one row per timestamp. I just tried going one row per tab-delimited line, and then the text shows up fine, but it complains about the timespan format, which is frustrating, since it is in the format described here: http://help-nv10.qsrinternational.com/desktop/procedures/import_audio_or_video_transcripts.htm Cannot mix timestamp with timespan, maybe? I'd really like it to work with one line per timestamp, though...
    UPDATE 2: I have gotten past this problem, practically speaking, but it required a lot of Excel to put timespans on every line. Something is broken with the text encoding reading on the "one line per timestamp" option.
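The "a lot of Excel" step, putting a timespan on every line, can be scripted as well. A sketch that pairs each line's timestamp with the next line's timestamp to build a start-end span; the file names, the two-column layout, the "start - end" separator, and the placeholder end time for the final line are all assumptions rather than anything the NVivo documentation guarantees.

    # Sketch: convert a "timestamp<TAB>text" transcript into "start - end<TAB>text" rows.
    # File names, column layout, and the span separator are hypothetical; check the
    # timespan format on the help page linked above before importing.
    import csv

    with open("transcript_ja.txt", encoding="utf-8-sig", newline="") as f:
        rows = [row for row in csv.reader(f, delimiter="\t") if row]

    with open("transcript_ja_timespans.txt", "w", encoding="utf-8", newline="") as f:
        writer = csv.writer(f, delimiter="\t")
        for i, row in enumerate(rows):
            start = row[0]
            # The next line's timestamp closes this span; the last line gets a placeholder end.
            end = rows[i + 1][0] if i + 1 < len(rows) else "1:00:00.0"
            writer.writerow([f"{start} - {end}"] + row[1:])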
  12. Yeah, it's there; the UI is just really, really unintuitive.
  13. Video Coding UI Suggestions

    A couple other things I forgot to mention, but remembered as I sat down to do some more coding:
      • When clicking in the timeline, the playhead should jump to that location. This wouldn't work as things stand, since regions are defined by clicking/dragging in the timeline, but if that were moved to a different "stripe" above or below the waveform, then it would be no problem.
      • Clicking and dragging to define regions should drag the playhead with it so you can see what you're getting.
      • Big feature request: Audio should scrub when moving the playhead or selecting regions.
  14. Hey folks. I'm a new NVivo user, but I've been around it for several years and knew what it could do before using it for my latest research project. I'll be blunt: The video coding UI is infuriating, and not just because all aspects of working with video are infuriating and even the best UI for video is a pain, but because it could be improved in NVivo with just some cosmetic tweaks that I doubt would require much difficult coding on your end.
    First, the easy one: The video playhead/shuttle is on the same line as where one selects regions for coding. This makes it very hard to indicate whether you intend to define a region or move the playhead. To fix this, NVivo should basically rip a page out of the audio/video editing application playbook and add a new, slim timeline above the waveform display that is only for defining regions, separating it from the playhead. This relatively small UI tweak alone would save my wife and neighbors, both at home and at work, from all manner of shrieks and curse words as I try to define a region but move the playhead instead, or try to move the playhead and define a region instead. The playhead itself, I think, should also be moved. Rather than a little blue bubble, it should be a pentagon below the waveform, as it is in many audio or video editing applications. It should not obscure the waveform, as it does now.
    Now, a little harder: Oh dear lord, the playback controls. I wonder what was ingested the night those were dreamt up. When every other audio or video application in the world uses the spacebar to play/pause, moving that to F4 is more than a little crazy. I imagine it was done to avoid collision with transcription, but I can't imagine that it would be too hard to determine whether the user was in the transcription field or in the video player/timeline/coding stripes to enable/disable that behavior. Similarly, the begin/end region keys are somewhat awkward as well. It seems they should be moved to regular character keys, such as comma and period.
    Finally, a feature request: Dragging coded regions to nodes or using right-click is fine with material that only has a few codes, but if almost every second is coded, then it becomes very laborious. What would make this process much smoother is if the user could map certain keys to frequently-used nodes, and code by pressing the appropriate key(s) to define the region and code it simultaneously, with the release of that key ending the region. Failing that, just being able to assign keyboard shortcuts to oft-used nodes (like key commands for styles in MS Office) would go a long way toward sparing users RSI from mouse clicking and dragging.
    The End. Thanks for reading.