Final Review

Review Nº 06

AI Adoption in the Workplace: A Scoping Review on Benefits, Risks, Mitigations, and Readiness Frameworks
Authors: Edoardo Lamantea, Elia Mandrelli, Ali Adham, Cyril de Wandeleer, Pablo Panariti, Andrea Vozza, Alberto Benedicenti, Emanuele Dulla, Sofia Pesare, Alessandro Gabriele Solomon and Gemma Stumpo
Score: 30/30
The revision delivers a comprehensive overhaul that directly addresses every consensus point from the first round: a full PRISMA-ScR flow chart, explicit Boolean search strings and eligibility criteria, a complete AI disclosure with source-verification language, corpus expansion from 18 to 48 sources, a proper conclusion section, and substantially strengthened design-specific evidence on deskilling. Throughout, the framework's coherence and analytical clarity are preserved.
Perfect Score

The Pros

9 Items
+
Methodology section is now fully transparent: databases, date range, Boolean queries (Table 1), and inclusion/exclusion criteria are all explicit (resolves R1, R2, R3, R4, R5, R9).
+
Figure 1 replaces the textual funnel with a proper PRISMA-ScR flow chart — a direct fix to R3 and R1.
+
AI disclosure now contains the verification language reviewers asked for (R3, R4): tool scope, source verification by humans, and explicit confirmation that no citations were fabricated.
+
Corpus expanded from 18 to 48 sources, with a clear breakdown by evidence type (empirical / policy / theoretical), addressing R7 directly and R6/R9 indirectly.
+
Conclusion section (5.2) is now present, fixing R2's concern that the paper ended on gaps.
+
Table 2 cleanly maps themes → factors → evidence strength → sources, and an evidence-strength rubric is now provided, resolving R4's request to clarify evidence-strength criteria.
+
The use case is more tightly evidenced: design-specific sources ([19], [20], [32], [38]) replace the previously unsupported quantitative claims, resolving R1's "50-100 variants" criticism.
+
Deskilling discussion is substantially strengthened with the Einstellung effect (Hou et al. [32]) and cognitive-offloading evidence (Shukla et al. [20]), responding to R6, R7, R8, and R9.
+
Contextual Integrity overlay (Section 3.5) makes the framework's transferability claim conceptually explicit by mapping themes onto roles/activities/norms/values.

The Cons

5 Items
-
Cross-industry transferability is argued conceptually rather than demonstrated through a second worked use case (R6 had suggested predictive maintenance).
-
Geographic bias is acknowledged in Section 5.1 as a limitation but the corpus itself remains predominantly Western-sourced.
-
Measurable readiness and deskilling metrics are described qualitatively; reviewers (R6, R7, R8) had encouraged moving toward quantifiable indicators.
-
The coding matrix is linked as a supplementary archive rather than included as an inline excerpt; R1 and R3 had asked for a visible extraction table with explicit "Context" and "Constraint" columns.
-
No risk-readiness matrix or factor-interaction visualization was added beyond the PRISMA chart and publication-year histogram.

Suggested Changes

12 Pointers
01
High
Location
Section 4 (Worked Use Case)
Issue
Only one domain (packaging design) is demonstrated, leaving cross-industry transferability asserted but not shown — flagged by R4, R6, R8.
Suggested Fix
Add a second, shorter worked use case in a contrasting domain (e.g., industrial predictive maintenance as R6 suggested, or customer-support triage), even as a half-page comparative table mapping how each T1-T4 factor manifests differently.
02
High
Location
Section 3.4 (Organizational Readiness)
Issue
Readiness factors are described qualitatively; reviewers (R6, R7) asked for measurable readiness metrics.
Suggested Fix
Introduce a small "indicator catalogue" table proposing 1-2 measurable proxies per readiness factor (e.g., AI literacy: % staff completing certified training; data quality: % of datasets with documented lineage).
03
High
Location
Section 3.2 / Section 5.1 (Risks and Gaps)
Issue
Deskilling evidence is now stronger but still cross-sectional, and no measurable indicators are offered (R6, R7, R8).
Suggested Fix
Add a short paragraph proposing concrete skill-erosion metrics (e.g., independent-output-without-AI benchmarks, periodic blinded portfolio reviews, decay rate of unaided task performance) and explicitly flag which would suit longitudinal designs.
04
Medium
Location
Section 5.1 (Gaps and Future Work)
Issue
Western-centric bias is named as a limitation but the actual review contains no non-Western sources, leaving the concerns of R6, R7, and R9 only formally addressed.
Suggested Fix
Add at least 2-3 non-Western or comparative sources (e.g., Asian, African, Latin American labour-AI studies) into the relevant T2/T4 sections, even if as contrasting evidence, and reflect this in Table 2's "Key Sources" column.
05
Medium
Location
Section 2.4 (Data Charting) / new Appendix
Issue
Coding matrix is referenced as supplementary ZIP only; R1 and R3 requested a visible extraction table with explicit "Context" and "Constraint" columns.
Suggested Fix
Inline a condensed coding-matrix excerpt (10-15 illustrative rows) in the appendix with columns: Source, CI dimension, Industry, Factor, Context, Constraint, Evidence type.
06
Medium
Location
Section 3 (Results) — between 3.4 and 3.5
Issue
The four themes are described as interdependent but the interdependence is not visualized; R1, R3, R6 asked for a gap-map or risk-readiness matrix.
Suggested Fix
Insert a 2×2 (or 4×4) matrix figure mapping readiness levels against risk-materialization probability, or a Sankey-style diagram linking Risks to Mitigations to Readiness factors.
07
Medium
Location
Section 3.3 (Mitigation Strategies) and Section 3.4
Issue
The augmentation-vs-automation distinction (R6) and the accountability-trust link (R8) are treated implicitly.
Suggested Fix
Add a labelled paragraph in 3.3 distinguishing augmentation- and automation-oriented mitigations, and a sentence in 3.2/3.3 explicitly connecting accountability diffusion to documented trust-erosion findings.
08
Medium
Location
Section 2.2 (Eligibility Criteria)
Issue
The exclusion of "papers with zero citations (unless published within the last 18 months)" is methodologically debatable for a fast-moving field.
Suggested Fix
Either justify this rule with one sentence (e.g., proxy for unscrutinised work), or relax it for sources younger than 24 months, and note how many of the 780 zero-citation exclusions might have been borderline.
09
Low
Location
Figure 2 caption
Issue
Caption refers to "highlighted in blue" 2024-2025 bars but the figure as rendered shows no clear colour distinction, which may confuse readers.
Suggested Fix
Either add the colour-highlighting in the figure file or rewrite the caption to describe the trend without referring to colour.
10
Low
Location
Table 2, "Evidence" column
Issue
Evidence-strength labels (Strong / Moderate / Emerging) are now defined in the rubric below the table, but the rubric is on a separate page from the table.
Suggested Fix
Move the evidence-strength rubric directly under Table 2 (or into the table footer) so reviewers can apply it without page-flipping.
11
Low
Location
Section 1 (Introduction)
Issue
Framing does not commit to which decision-makers should use the framework (HR, legal, line managers, executives).
Suggested Fix
Add one sentence specifying the intended user(s) of the framework and the decision moment it supports (pre-procurement vs deployment design vs post-deployment review).
12
Medium
Location
References [33], [36], [40]
Issue
These sources appear in the bibliography but are not visibly cited in the body text, which weakens the "48 sources synthesized" claim.
Suggested Fix
Either integrate them into the relevant T1/T2 paragraphs (Doshi & Hauser [36] is highly relevant to creative-diversity erosion) or remove them from the corpus count.
Final Review · Submission #6 · The Index · Grandi Sfide · 2026